
Abstract

We study the local convergence of an Aitken–Steffensen-type method for approximating the solutions of nonlinear scalar equations. We show that, under some usual assumptions on the nonlinear function (involving monotonicity and convexity), the considered method generates bilateral approximations of the solution, which yield an error estimate at each iteration step.

Authors

Ion Păvăloiu
(Tiberiu Popoviciu Institute of Numerical Analysis)



Cite this paper as:

I. Păvăloiu, Monotone sequences for approximating the solutions of equations, Bul. Ştiinţ. Univ. Baia Mare Ser. B Mat.-Inf., 15 (1999) nos. 1-2, pp. 103-110.

About this paper

Publisher Name

Universitatea Baia Mare



Paper (preprint) in HTML form

Monotone sequences for approximating the solutions of equations

Bul. Ştiinţ. Univ. Baia Mare, Ser. B, Matematică-Informatică, Vol. XV (1999), No. 1–2, 103–110

Dedicated to Professor Ion PĂVĂLOIU on his 60th anniversary


Ion Păvăloiu

1. Introduction

In what follows we consider Aitken–Steffensen-type methods and some conditions under which they generate bilateral sequences approximating the solutions of scalar equations.

Let $I=[a,b]$, $a<b$, be an interval of the real axis and consider the equation

(1) $f(x)=0$,

where $f:I\to\mathbb{R}$. Moreover, let

(2) $x-g_1(x)=0,\qquad x-g_2(x)=0,$

with $g_1,g_2:I\to\mathbb{R}$, be two other equations.

We shall assume that if $\bar{x}$ is a root of (1), then it also satisfies both equations in (2).

The Aitken–Steffensen method consists in the construction of the sequences $(x_n)_{n\ge0}$, $(g_1(x_n))_{n\ge0}$, $(g_2(g_1(x_n)))_{n\ge0}$ generated by the following iterative process:

(3) $x_{n+1}=g_1(x_n)-\dfrac{f(g_1(x_n))}{[g_1(x_n),g_2(g_1(x_n));f]},\qquad n=0,1,\ldots,\quad x_0\in I,$

where [u,v;f] denotes the first order divided difference of f at the points u and v.

The second order divided differences of f will be denoted by [u,v,w;f].
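As a computational illustration (ours, not part of the paper), one can sketch process (3) in Python; the function names and the stopping rule below are our own choices:

```python
def divided_difference(f, u, v):
    """First-order divided difference [u, v; f]; assumes u != v."""
    return (f(u) - f(v)) / (u - v)

def aitken_steffensen_step(f, g1, g2, x):
    """One step of process (3)."""
    y = g1(x)               # g1(x_n)
    z = g2(y)               # g2(g1(x_n))
    if y == z:              # already at a common fixed point; avoid 0/0
        return y
    return y - f(y) / divided_difference(f, y, z)

def aitken_steffensen(f, g1, g2, x0, tol=1e-12, max_iter=50):
    """Iterate (3) from x0 until two successive iterates agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = aitken_steffensen_step(f, g1, g2, x)
        if abs(x_new - x) <= tol:
            return x_new
        x = x_new
    return x
```

For instance, with $f(x)=x^2-2$ on $[1,2]$ and the auxiliary functions $g_1(x)=x-f(x)/4$, $g_2(x)=x-f(x)/2$ (a choice anticipating Section 4), the iterates increase toward $\sqrt{2}$.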

In this paper we show that, in the study of the convergence of the sequences generated by (3), an important role is played by the convexity of the function $f$. We complement and refine the results obtained in [5]–[7].

Concerning the convexity and the monotonicity of the functions we shall consider the following definitions (see, for example, [3, pp. 288–299 and p. 327]).

Definition 1.1.

The function $g:I\to\mathbb{R}$ is called increasing (nondecreasing, decreasing, resp. nonincreasing) on the interval $I$ if for all $x,y\in I$, $x\ne y$, it follows that $[x,y;g]>0$ ($\ge0$, $<0$, resp. $\le0$).

Definition 1.2.

The function $g:I\to\mathbb{R}$ is called convex (nonconcave, concave, resp. nonconvex) if for all distinct $x,y,z\in I$ it follows that $[x,y,z;g]>0$ ($\ge0$, $<0$, resp. $\le0$).

Some of the usual properties of convex functions will be used in what follows; we recall them without proof (see, e.g., [3, pp. 288–299]).

Denote by $s_{g,x_0}(x)=[x_0,x;g]$, $x\in I\setminus\{x_0\}$, the slope of the function $g$ at $x_0$. The following results hold:

Proposition 1.1.

Let $g:I\to\mathbb{R}$ be an arbitrary function and $x_0\in I$.

  1. If $g$ is convex on $I$, then $s_{g,x_0}$ is increasing on $I\setminus\{x_0\}$.

  2. If $g$ is nonconcave on $I$, then $s_{g,x_0}$ is nondecreasing on $I\setminus\{x_0\}$.

Proposition 1.2.

If $g:\,]a,b[\,\to\mathbb{R}$ is nonconcave, then $g$ admits a left derivative $g_l'(x)$ and a right derivative $g_r'(x)$ at any point $x\in\,]a,b[$. Moreover, the functions $g_l'$ and $g_r'$ are nondecreasing on $]a,b[$ and $g_l'(x)\le g_r'(x)$ for all $x\in\,]a,b[$.

Proposition 1.3.

If $g:I\to\mathbb{R}$ is a convex function on $I$, then

  1. the function $g$ is continuous at any point $x\in\operatorname{int}(I)$;

  2. the function $g$ satisfies a Lipschitz condition on any compact interval contained in $I$;

  3. the function $g$ is differentiable on $I$ except on an at most countable subset of $I$.

Proposition 1.4.

Let $g:\operatorname{int}(I)\to\mathbb{R}$. The following statements are equivalent:

  1. the function $g$ is convex on $\operatorname{int}(I)$;

  2. at every $x\in\operatorname{int}(I)$ the left derivative $g_l'(x)$ exists and is finite, and $g_l'$ is increasing on $\operatorname{int}(I)$;

  3. at every $x\in\operatorname{int}(I)$ the right derivative $g_r'(x)$ exists and is finite, and $g_r'$ is increasing on $\operatorname{int}(I)$.

Taking into account the properties stated in Propositions 1.1–1.4, in the present note we aim to simplify the hypotheses required in [5]–[7]. As we shall see, the convexity properties of the function $f$ from equation (1) play an essential role in the construction of the functions $g_1$ and $g_2$ from (2).

2. The monotonicity of the sequences generated by the Aitken-Steffensen method

We shall consider the following hypotheses concerning the functions $f$, $g_1$ and $g_2$:

  • (a) the function $f$ is convex on $I$;

  • (b) the functions $g_1$ and $g_2$ are continuous on $I$;

  • (c) the function $g_1$ is increasing on $I$;

  • (d) the function $g_2$ is decreasing on $I$;

  • (e) equation (1) has a unique solution $\bar{x}\in I$;

  • (f) for any $x,y\in I$ it follows that $0<[x,y;g_1]\le 1$.

Concerning the convergence of the sequences $(x_n)_{n\ge0}$, $(g_1(x_n))_{n\ge0}$ and $(g_2(g_1(x_n)))_{n\ge0}$, the following result holds.

Theorem 2.1.

If the functions $f,g_1,g_2$ satisfy conditions (a)–(f) and, moreover,

  • i1. the function $f$ is increasing on $I$;

  • ii1. there exists $x_0\in I$ such that $f(x_0)<0$ and $g_2(g_1(x_0))\in I$,

then the sequences $(x_n)_{n\ge0}$, $(g_1(x_n))_{n\ge0}$, $(g_2(g_1(x_n)))_{n\ge0}$ generated by (3) with the initial approximation $x_0$ considered above have the following properties:

  • j1. the sequences $(x_n)$ and $(g_1(x_n))$ are increasing and bounded;

  • jj1. the sequence $(g_2(g_1(x_n)))_{n\ge0}$ is decreasing and bounded;

  • jjj1. $\lim x_n=\lim g_1(x_n)=\lim g_2(g_1(x_n))=\bar{x}$;

  • jv1. the following relations hold:

    $x_n\le g_1(x_n)\le\bar{x}\le g_2(g_1(x_n)),\qquad n=0,1,\ldots$
    $\max\{\bar{x}-x_{n+1},\;g_2(g_1(x_n))-\bar{x}\}\le g_2(g_1(x_n))-x_{n+1},\qquad n=0,1,\ldots$
Proof.

Since $f$ is increasing on $I$, $f(x_0)<0$, and $\bar{x}$ is the unique solution of $f(x)=0$ on $I$, it follows that $x_0<\bar{x}$. By (c) and (f), for $x<y$ we get $g_1(y)-g_1(x)\le y-x$. Now, for $y=\bar{x}$ (recall that $g_1(\bar{x})=\bar{x}$) one obtains $x-g_1(x)\le0$ when $x<\bar{x}$, and similarly $x-g_1(x)\ge0$ when $x>\bar{x}$. By (c) and $x_0<\bar{x}$ it follows that $g_1(x_0)<g_1(\bar{x})$, i.e., $g_1(x_0)<\bar{x}$. Since $x_0<\bar{x}$, one gets $x_0\le g_1(x_0)$. By (d) and $g_1(x_0)<\bar{x}$ it results that $g_2(g_1(x_0))>g_2(\bar{x})$, i.e., $g_2(g_1(x_0))>\bar{x}$. By i1 and $g_1(x_0)<\bar{x}$ it results that $f(g_1(x_0))<0$. Hypothesis i1 also implies $[g_1(x_0),g_2(g_1(x_0));f]>0$, whence, by (3), one obtains $x_1>g_1(x_0)$.

It can be easily verified that the following identities hold for all $x,y,z\in I$:

(4) $g_1(x)-\dfrac{f(g_1(x))}{[g_1(x),g_2(g_1(x));f]}=g_2(g_1(x))-\dfrac{f(g_2(g_1(x)))}{[g_1(x),g_2(g_1(x));f]}$

(5) $f(z)=f(x)+[x,y;f](z-x)+[x,y,z;f](z-x)(z-y).$

Since $g_2(g_1(x_0))>\bar{x}$, it follows that $f(g_2(g_1(x_0)))>0$, and using (4) one obtains $x_1<g_2(g_1(x_0))$. Now, if in (5) we set $z=x_1$, $x=g_1(x_0)$, $y=g_2(g_1(x_0))$ and take (3) into account, we get

$f(x_1)=[g_1(x_0),g_2(g_1(x_0)),x_1;f]\,(x_1-g_1(x_0))(x_1-g_2(g_1(x_0))).$

But $f$ is convex, so the second order divided difference above is positive, while $x_1-g_1(x_0)>0$ and $x_1-g_2(g_1(x_0))<0$; hence $f(x_1)<0$ and consequently $x_1<\bar{x}$. Summarizing, we have obtained the relations

$x_0\le g_1(x_0)<x_1<\bar{x}<g_2(g_1(x_0)).$

It remains to prove that $x_1$ satisfies hypothesis ii1, so that the above reasoning may be repeated.

Since $g_2$ is decreasing, $g_1$ is increasing and $x_0<x_1$, the following inequalities hold: $g_1(x_0)<g_1(x_1)$ and $g_2(g_1(x_0))>g_2(g_1(x_1))$.

From $x_1<\bar{x}$ it follows that $g_2(g_1(x_1))>g_2(g_1(\bar{x}))$, i.e., $g_2(g_1(x_1))>\bar{x}$, which together with $g_2(g_1(x_1))<g_2(g_1(x_0))$ shows that $g_2(g_1(x_1))\in I$.

Consider now $x_n\in I$ with $f(x_n)<0$ and $g_2(g_1(x_n))\in I$. If in the above reasoning we replace $x_0$ by $x_n$, we obtain

(6) $x_n\le g_1(x_n)<x_{n+1}<\bar{x}<g_2(g_1(x_n)),\qquad n=0,1,\ldots,$

and so assertions j1, jj1 and jv1 of the theorem are proved. In order to prove jjj1, denote $l_1=\lim x_n$, $l_2=\lim g_1(x_n)$ and $l_3=\lim g_2(g_1(x_n))$; we shall prove that $l_1=l_2=l_3=\bar{x}$. Indeed, by (6) and (b) we get

$l_1\le g_1(l_1)\le l_1\le\bar{x}\le g_2(g_1(l_1)),$

i.e., $g_1(l_1)=l_1$, and so $l_1\le\bar{x}\le g_2(l_1)$. Since $f$ is convex on $I$, Proposition 1.3 ensures that $f$ is continuous at $l_1$, and passing to the limit in (3) it follows that $f(l_1)=0$, i.e., $l_1=\bar{x}$.

The equality $g_1(l_1)=l_1=\bar{x}$ implies $l_2=\bar{x}$.

Finally, $l_3=g_2(l_1)\ge\bar{x}$ implies $f(g_2(l_1))\ge0$; since $l_1\le g_2(l_1)$ and, at the same time, (4) implies $l_1\ge g_2(l_1)$, we obtain $l_1=g_2(l_1)=l_3$. ∎
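As a numerical illustration of the theorem (our own example; the paper itself contains no numerics), take $f(x)=x^2-2$ on $I=[1,2]$ with $g_1(x)=x-f(x)/4$ and $g_2(x)=x-f(x)/2$, a choice anticipating Section 4. The enclosure (6) and the error estimate in jv1 can then be verified directly:

```python
def f(x):  return x * x - 2.0        # increasing and convex on [1, 2]
def g1(x): return x - f(x) / 4.0     # increasing, with slope in (0, 1]
def g2(x): return x - f(x) / 2.0     # decreasing on [1, 2]

root = 2.0 ** 0.5                    # the unique solution of f(x) = 0 in [1, 2]
x = 1.0                              # x0 with f(x0) < 0, as required by ii1
for n in range(2):
    y, z = g1(x), g2(g1(x))
    x_next = y - f(y) * (y - z) / (f(y) - f(z))   # process (3)
    # bilateral enclosure (6): x_n <= g1(x_n) < x_{n+1} < root < g2(g1(x_n))
    assert x <= y < x_next < root < z
    # error estimate from jv1
    assert max(root - x_next, z - root) <= z - x_next
    x = x_next
```

Each step thus delivers a computable error bound $g_2(g_1(x_n))-x_{n+1}$, without knowledge of $\bar{x}$.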

Analogous results hold in the case when f is decreasing and convex, or increasing, resp. decreasing and concave (see [7]).

3. The Steffensen method

This method is obtained from (3) for $g_1(x)=x$ for all $x\in I$. For simplicity, in this section we denote $g_2=g$. The Steffensen method then reads

(7) $x_{n+1}=x_n-\dfrac{f(x_n)}{[x_n,g(x_n);f]},\qquad n=0,1,\ldots,\quad x_0\in I.$

We observe that hypotheses (b), (c) and (f) from the previous section are automatically satisfied for this choice of $g_1$.
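An illustrative Python sketch of one step of (7) (our own; the names are hypothetical):

```python
def steffensen_step(f, g, x):
    """One step of method (7): x - f(x) / [x, g(x); f]."""
    y = g(x)                              # g(x_n); assumed distinct from x_n
    slope = (f(x) - f(y)) / (x - y)       # divided difference [x_n, g(x_n); f]
    return x - f(x) / slope
```

For example, with $f(x)=x^2-2$ and the decreasing auxiliary function $g(x)=x-f(x)/2$ on $[1,2]$, one step from $x_0=1$ gives $x_1=1.4$, already bracketed: $x_1<\sqrt{2}<g(x_1)$.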

Concerning the functions f and g it remains here to make the following assumptions:

  • (a1) the function $f$ is convex on $I$;

  • (b1) the function $g$ is decreasing and continuous on $I$;

  • (c1) equations (1) and $x-g(x)=0$ have the same unique solution $\bar{x}\in\operatorname{int}(I)$.

We obtain the following consequences concerning the convergence of the method (7):

Corollary 3.1.

If the functions $f$ and $g$ obey (a1)–(c1), and moreover $f$ is increasing on $I$, $f'(\bar{x})$ exists, and the point $x_0$ in (7) is chosen such that $f(x_0)<0$ and $g(x_0)\in I$, then the sequences $(x_n)_{n\ge0}$ and $(g(x_n))_{n\ge0}$ satisfy the following properties:

  • j2. the sequence $(x_n)_{n\ge0}$ is increasing and bounded;

  • jj2. the sequence $(g(x_n))_{n\ge0}$ is decreasing and bounded;

  • jjj2. $\lim x_n=\lim g(x_n)=\bar{x}$;

  • jv2. $x_n\le\bar{x}\le g(x_n)$, $n=0,1,\ldots$;

  • v2. $\max\{\bar{x}-x_n,\;g(x_n)-\bar{x}\}\le g(x_n)-x_n$, $n=0,1,\ldots$

We shall assume in the following that the function $f$ from equation (1) has the form $f(x)=x-g(x)$. In this case (7) becomes

(8) $x_{n+1}=x_n-\dfrac{(x_n-g(x_n))^2}{g(g(x_n))-2g(x_n)+x_n},\qquad n=0,1,\ldots,\quad x_0\in I.$
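A step of (8) uses only evaluations of $g$; a minimal Python sketch (our own) is:

```python
def aitken_step(g, x):
    """One step of (8), the form of (7) obtained for f(x) = x - g(x)."""
    gx = g(x)
    denom = g(gx) - 2.0 * gx + x     # g(g(x_n)) - 2 g(x_n) + x_n
    return x - (x - gx) ** 2 / denom
```

Since $f(x)=x-g(x)$, this step coincides algebraically with a step of (7) for the same $g$.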

Concerning the convergence of these iterates, we obtain from Corollary 3.1 the following result.

Corollary 3.2.

If $g$ is decreasing and concave on $I$, equation $x-g(x)=0$ has a unique solution $\bar{x}\in\operatorname{int}(I)$, $g'(\bar{x})$ exists, and the initial approximation is chosen such that $x_0<g(x_0)$ with $g(x_0)\in I$, then the sequences $(x_n)_{n\ge0}$ and $(g(x_n))_{n\ge0}$ generated by (8) satisfy the conclusions of Corollary 3.1.

Proof.

Since $g$ is decreasing on $I$, it follows that for any $x,y\in I$ we have $[x,y;g]<0$, and so $1-[x,y;g]>0$, i.e., $[x,y;f]>0$ for all $x,y\in I$, which implies that $f$ is increasing. On the other hand, for all $x,y,z\in I$ we have $[x,y,z;f]=-[x,y,z;g]$, and since $g$ is concave we obtain that $f$ is convex. One can now see that the hypotheses of Corollary 3.1 are satisfied. ∎

4. Applications

In this section we show that the functions $g_1,g_2$ (resp. $g$) from the auxiliary equations (2) (resp. $x-g(x)=0$) may be determined in different ways, under convexity and monotonicity assumptions on the function $f$ from (1), such that the essential hypotheses of Theorem 2.1, resp. Corollaries 3.1 and 3.2, are automatically satisfied.

We shall assume that $f$ is increasing and convex on $I$, i.e., for all $x,y,z\in I$ we have $[x,y;f]>0$ and $[x,y,z;f]>0$. Let $[\alpha,\beta]\subset\operatorname{int}(I)$. Choose

$g_1(x)=x-\dfrac{f(x)}{f_l'(\beta)}\qquad\text{and}\qquad g_2(x)=x-\dfrac{f(x)}{f_r'(\alpha)}$

(the existence of the lateral derivatives $f_l'(\beta)$ and $f_r'(\alpha)$ is ensured by Proposition 1.4). Obviously, $f_l'(\beta)>0$ and $f_r'(\alpha)>0$, since we have assumed that $f$ is increasing on $I$. From the convexity of $f$ it follows that $f$ is continuous on $[\alpha,\beta]$, and hence $g_1$ and $g_2$ are both continuous on $[\alpha,\beta]$, therefore satisfying hypothesis (b). On the other hand, for all $x,y\in[\alpha,\beta]$ we have

$[x,y;g_1]=1-\dfrac{[x,y;f]}{f_l'(\beta)},$

and since $f$ is convex we get $[x,y;f]<f_l'(\beta)$, i.e., $[x,y;g_1]>0$ (in other words, $g_1$ is an increasing function on $[\alpha,\beta]$).

A similar reasoning leads to the conclusion that g2 is a decreasing function on [α,β].

Summing up, one can see that hypotheses (c) and (d) are both satisfied. Since $f$ is assumed increasing, hypothesis (e) is verified. Hypothesis (f) follows from the relation

$0<\dfrac{[x,y;f]}{f_l'(\beta)}<1,$

which gives $0<[x,y;g_1]=1-\dfrac{[x,y;f]}{f_l'(\beta)}<1.$

We now choose $x_0=\alpha$ in (3) and assume that $g_2(g_1(\alpha))<\beta$, in which case the functions $f,g_1$ and $g_2$ obviously satisfy the hypotheses of Theorem 2.1.

Remarks.

1. From the above reasoning it follows that, in order to obtain bilateral approximation sequences for the solution $\bar{x}$ of (1), it suffices to assume monotonicity and convexity of $f$, together with the condition $g_2(g_1(x_0))\in I$.

2. If we choose $0<\lambda_1\le f_r'(\alpha)<f_l'(\beta)\le\lambda_2$, then the functions

$g_1(x)=x-\dfrac{f(x)}{\lambda_2},\qquad g_2(x)=x-\dfrac{f(x)}{\lambda_1}$

obey the conditions of Theorem 2.1.

3. If we choose the functions $g_1,g_2$ given by

$g_1(x)=x,\qquad g_2(x)=x-\dfrac{f(x)}{\lambda_1}=g(x),$

then the hypotheses of Corollary 3.1 are fulfilled.

References

Received October 15, 1999

North University of Baia Mare

Department of Mathematics and Computer Science

Victoriei 76, 4800 Baia Mare

Romania
