
Author

Ion Păvăloiu
(Tiberiu Popoviciu Institute of Numerical Analysis)

Keywords

nonlinear equations in R; one step iterative methods; multistep iterative methods; convergence order.

PDF

Scanned manuscript (in Romanian).


Cite this paper as:

I. Păvăloiu, On the convergence of some one step and multistep iterative methods, 2010, manuscript.


Paper (preprint) in HTML form

ON THE CONVERGENCE ORDER OF THE MULTISTEP METHODS

Buletinul Ştiinţific al Universităţii din Baia Mare

Seria B, Matematică-Informatică, vol. XIII (1997), 107–116


Ion Păvăloiu

1. Introduction

In this paper we analyse some aspects concerning the convergence order of iterative methods of interpolatory type for solving scalar equations. A unified approach to these methods will enable us to analyse them and then to select among them those with the optimal convergence order.

2. Convergence order

Denote I=[a,b], a,b∈ℝ, a<b, and consider the equation

(1) f(x)=0

where f:I→ℝ. For the sake of simplicity we shall suppose in the following that equation (1) has a unique solution x̄∈I. Let g:I→I be a function for which x̄ is the unique fixed point on I.

For the approximation of the solution of (1) one generally considers, under certain conditions, a sequence (x_p)_{p≥0} generated by:

(2) x_{s+1} = g(x_s),  s=0,1,…,  x_0∈I.

More generally, if G:I^k→I is a function of k variables whose restriction to the diagonal of the set I^k coincides with g, namely

G(x,x,…,x) = g(x),  for all x∈I,

then the following iterative method can be considered:

(3) x_{s+k} = G(x_s, x_{s+1}, …, x_{s+k−1}),  s=0,1,…,  x_0, x_1, …, x_{k−1} ∈ I.

The convergence of the sequence (x_p)_{p≥0} generated by (2) or (3) depends on certain properties of the functions f and g, respectively G. The amount of time needed to obtain a convenient approximation x_p of x̄ depends both on the convergence order of the sequence (x_p)_{p≥0} and on the number of elementary operations that must be performed by the computer at each iteration step. In this paper we shall study only the first aspect of this problem, i.e. we shall deal with the convergence order.
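As a concrete (hypothetical) illustration of the schemes (2) and (3), the sketch below iterates a Newton-type fixed-point function g for the sample equation f(x)=x²−2 on [1,2], and then a two-step instance of (3) with k=2, namely the secant function G(u,v)=v−f(v)(v−u)/(f(v)−f(u)); the function f and the starting points are our choices, not the paper's.

```python
# One-step method (2): x_{s+1} = g(x_s), with g a Newton-type iteration function for f.
def f(x):
    return x * x - 2.0              # sample equation f(x) = 0, root 2**0.5

def g(x):
    return x - f(x) / (2.0 * x)     # g(x) = x - f(x)/f'(x)

x = 1.0
for _ in range(8):
    x = g(x)
print(x)                            # ≈ 2**0.5

# Two-step method (3), k = 2: the secant iteration x_{s+2} = G(x_s, x_{s+1}).
def G(u, v):
    if f(u) == f(v):                # nodes merged numerically: stay put
        return v
    return v - f(v) * (v - u) / (f(v) - f(u))

x0, x1 = 1.0, 2.0
for _ in range(8):
    x0, x1 = x1, G(x0, x1)
print(x1)                           # ≈ 2**0.5
```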

We shall adopt for the convergence order a definition a little more general than the one given by Ostrowski [3].

Consider an arbitrary sequence (x_p)_{p≥0} satisfying, together with f and g, the following conditions:

  • a)

x_s∈I and g(x_s)∈I for s=0,1,…;

  • b)

the sequences (x_p)_{p≥0} and (g(x_p))_{p≥0} converge to x̄, the solution of equation (1);

  • c)

there exists m∈ℝ, m>0, such that 0<|[x,y;f]|≤m for all x,y∈I, where we have denoted by [x,y;f] the first order divided difference of f at the points x and y;

  • d)

    f is differentiable at x¯.

Definition 2.1.

The sequence (x_p)_{p≥0} has the convergence order ω, ω≥1, with respect to the function g, if there exists the limit

(4) α = lim_{p→∞} ln|g(x_p)−x̄| / ln|x_p−x̄|

and α=ω.

Remark 2.1.

If the elements of (x_p)_{p≥0} are generated by the iterative method (2), then the convergence order defined above coincides with the one defined in [3]. ∎
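The limit (4) can be estimated numerically along an orbit of (2). The sketch below (our example, not the paper's) does this for the Newton iteration function g(x)=x−f(x)/f′(x) applied to f(x)=x²−2, whose convergence order is known to be 2:

```python
import math

f = lambda x: x * x - 2.0
g = lambda x: x - f(x) / (2.0 * x)    # Newton iteration function; its order is known to be 2
root = math.sqrt(2.0)

x = 1.1
ratios = []
for _ in range(4):
    # the quotient appearing in (4): ln|g(x_p) - x̄| / ln|x_p - x̄|
    ratios.append(math.log(abs(g(x) - root)) / math.log(abs(x - root)))
    x = g(x)
print(ratios)    # the ratios approach ω = 2
```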

For the determination of the convergence order of some classes of iterative methods we shall need the following lemmas:

Lemma 2.1.

If the sequence (x_p)_{p≥0} and the functions f and g satisfy properties a)–d), then the sequence (x_p)_{p≥0} has the convergence order ω, ω≥1, with respect to the function g if and only if the following limit exists:

(5) lim_{p→∞} ln|f(g(x_p))| / ln|f(x_p)| = β,

and β=ω.

Proof.

Supposing that one of the relations (4) or (5) holds and taking into account properties a)–d) (in particular, f(x) = [x,x̄;f](x−x̄) for all x∈I, since f(x̄)=0), we obtain:

α = lim ln|g(x_p)−x̄| / ln|x_p−x̄|
  = lim [ln|f(g(x_p))| − ln|[g(x_p),x̄;f]|] / [ln|f(x_p)| − ln|[x_p,x̄;f]|]
  = lim (ln|f(g(x_p))| / ln|f(x_p)|) · [1 − ln|[g(x_p),x̄;f]| / ln|f(g(x_p))|] / [1 − ln|[x_p,x̄;f]| / ln|f(x_p)|]
  = lim ln|f(g(x_p))| / ln|f(x_p)| = β. ∎

Lemma 2.2.

If (u_p)_{p≥0} is a sequence of positive real numbers that satisfies:

  • i.

the sequence (u_p)_{p≥0} is convergent and lim_{p→∞} u_p = 0;

  • ii.

there exist the nonnegative numbers α_1, α_2, …, α_{n+1} and a bounded sequence (c_p)_{p≥0}, c_p>0, p=0,1,…, with inf{c_p}>0, such that

    (6) u_{s+n+1} = c_s · u_s^{α_1} · u_{s+1}^{α_2} ⋯ u_{s+n}^{α_{n+1}},  s=0,1,…;
  • iii.

the sequence (ln u_{p+1}/ln u_p)_{p≥0} is convergent and lim_{p→∞} ln u_{p+1}/ln u_p = ω.

Then ω is the positive root of the equation:

t^{n+1} − α_{n+1} t^n − α_n t^{n−1} − ⋯ − α_2 t − α_1 = 0.
Proof.

From (6) we obtain

lim_{s→∞} ln u_{n+s+1}/ln u_{n+s} = lim_{s→∞} ln c_s/ln u_{n+s} + Σ_{i=0}^{n} α_{i+1} · lim_{s→∞} ln u_{s+i}/ln u_{s+n}.

Taking into account the equalities

lim_{s→∞} ln c_s/ln u_{n+s} = 0  and  lim_{s→∞} ln u_{i+s}/ln u_{n+s} = 1/ω^{n−i},  i=0,1,…,n,

we obviously have

ω = Σ_{i=0}^{n} α_{i+1} · (1/ω^{n−i}),

i.e.,

ω^{n+1} − α_{n+1} ω^n − α_n ω^{n−1} − ⋯ − α_2 ω − α_1 = 0. ∎
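The positive root of the characteristic equation in Lemma 2.2 is easy to compute numerically; below is a minimal bisection sketch (function names are ours), valid under assumption (10), which places the root in (1, 2+max α_i):

```python
# P(t) = t^{n+1} - α_{n+1} t^n - ... - α_2 t - α_1, with alphas = (α_1, ..., α_{n+1}).
def char_poly(t, alphas):
    return t ** len(alphas) - sum(a * t ** i for i, a in enumerate(alphas))

def positive_root(alphas, iters=200):
    lo, hi = 1.0, 2.0 + max(alphas)       # (10) gives P(1) < 0, while P(hi) > 0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if char_poly(mid, alphas) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: α_1 = α_2 = 1 gives t² - t - 1 = 0, whose positive root is the golden ratio.
print(positive_root((1.0, 1.0)))          # ≈ 1.6180
```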

In the following we shall consider the equations:

(7) P(t) = t^{n+1} − α_{n+1} t^n − α_n t^{n−1} − ⋯ − α_2 t − α_1 = 0;
(8) Q(t) = t^{n+1} − α_1 t^n − α_2 t^{n−1} − ⋯ − α_n t − α_{n+1} = 0;
(9) R(t) = t^{n+1} − α_{i_1} t^n − α_{i_2} t^{n−1} − ⋯ − α_{i_n} t − α_{i_{n+1}} = 0,

where we shall suppose that α_1, α_2, …, α_{n+1} satisfy

(10) α_i ≥ 0, i=1,…,n+1,  Σ_{i=1}^{n+1} α_i > 1,
(11) α_{n+1} ≥ α_n ≥ ⋯ ≥ α_2 ≥ α_1,

i_1, i_2, …, i_{n+1} being an arbitrary permutation of the numbers 1,2,…,n+1.

Lemma 2.3.

If α_1, α_2, …, α_{n+1} satisfy conditions (10), then any of the equations (9) has a unique solution ω>1. Moreover, if (11) also holds and we denote by a, b, c the positive solutions of (7), (8) and (9) respectively, then

(12) 1 < b ≤ c ≤ a,

i.e. equation (7) has the greatest positive solution.

Proof.

Consider one of the (n+1)! equations of the form (9), denote by s the greatest natural number for which α_{i_s}≠0, i.e. α_{i_{s+1}} = α_{i_{s+2}} = ⋯ = α_{i_{n+1}} = 0, and consider the function

φ(t) = R(t)/t^{n−s+1}.

We have φ(1) = 1 − α_{i_1} − α_{i_2} − ⋯ − α_{i_s} < 0 and lim_{t→∞} φ(t) = +∞, whence it follows that the equation R(t)=0 has a solution greater than 1. Since the function ψ(t) = R(t)/t^{n+1} is, as a function of the variable τ = 1/t, τ>0, strictly decreasing, it follows that this solution is unique.

In order to prove (12) it will suffice to show that R(b) ≤ 0 and R(a) ≥ 0. Indeed, since Q(b)=0, we have:

R(b) = R(b) − Q(b)
 = (α_1−α_{i_1}) b^n + (α_2−α_{i_2}) b^{n−1} + ⋯ + (α_n−α_{i_n}) b + α_{n+1} − α_{i_{n+1}}
 = (b−1) [ (α_1−α_{i_1}) b^{n−1} + (α_1+α_2−α_{i_1}−α_{i_2}) b^{n−2} + ⋯
   + (α_1+α_2+⋯+α_{n−1}−α_{i_1}−α_{i_2}−⋯−α_{i_{n−1}}) b
   + (α_1+α_2+⋯+α_n−α_{i_1}−α_{i_2}−⋯−α_{i_n}) ],

whence it follows that R(b) ≤ 0, since b>1 and the inequalities

α_1+α_2+⋯+α_s − α_{i_1}−α_{i_2}−⋯−α_{i_s} ≤ 0,  s=1,…,n,

hold (by (11), α_1, …, α_s are the s smallest of the numbers α_1, …, α_{n+1}).

The inequality R(a) ≥ 0 is proved in a similar manner. ∎
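Lemma 2.3 can also be checked numerically. The sketch below (our own verification, with illustrative values α=(1,2,3)) computes the positive roots of all permuted equations (9) by bisection and confirms the ordering (12):

```python
from itertools import permutations

# Positive root of t^{n+1} - c_{n+1} t^n - ... - c_2 t - c_1 = 0 for coeffs = (c_1, ..., c_{n+1}).
def pos_root(coeffs):
    P = lambda t: t ** len(coeffs) - sum(c * t ** i for i, c in enumerate(coeffs))
    lo, hi = 1.0, 2.0 + max(coeffs)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if P(mid) < 0 else (lo, mid)
    return lo

alphas = (1.0, 2.0, 3.0)              # satisfies (10), and (11) reads α_3 ≥ α_2 ≥ α_1
roots = {p: pos_root(p) for p in permutations(alphas)}
a = roots[(1.0, 2.0, 3.0)]            # equation (7): largest α on the highest power
b = roots[(3.0, 2.0, 1.0)]            # equation (8): the reversed arrangement
assert all(b <= r <= a for r in roots.values())     # inequality (12): b ≤ c ≤ a
print(b, a)
```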

3. Iterative methods of interpolatory type

It is known that most of the iterative methods are obtained by inverse interpolation of Lagrange, Taylor or Hermite type.

Let F=f(I) be the image of I under f. We shall suppose that f is differentiable on I up to the order n+1, and that f′(x)≠0 for all x∈I. It then follows that f:I→F is invertible, so there exists f⁻¹:F→I. It is obvious that if x̄ is the solution of (1) then

(13) x̄ = f⁻¹(0).

In order to determine an approximation x̃ of x̄ it is sufficient to determine a function h which approximates f⁻¹ on a neighborhood V₀ of y=0.

If

(14) f⁻¹(y) = h(y) + R(f⁻¹, y),  y∈V₀,

then we can consider that x̃ ≈ x̄, with the error

(15) |x̄ − x̃| = |R(f⁻¹, 0)|,  where x̃ = h(0).

A simple and efficient way of constructing functions that approximate f⁻¹ on V₀ is inverse interpolation. The most general method of this type is the one that leads to the Hermite inverse interpolatory polynomial.

In order to construct the Hermite inverse interpolatory polynomial in its most general form we must know both the values of f⁻¹ and the values of its derivatives at certain prescribed points of V₀.

Concerning the successive derivatives of f⁻¹ we shall rely on the following lemma:

Lemma 3.1.

[6]. If f:I→F is differentiable up to the order n+1 and f′(x)≠0 for all x∈I, then there exists f⁻¹:F→I and the following equality holds:

(16) (f⁻¹)^{(k)}(y) = Σ (−1)^{k+i_1−1} · ((2k−i_1−2)! / (i_2! i_3! ⋯ i_k!)) · (1/[f′(x)]^{2k−1}) · Π_{j=1}^{k} (f^{(j)}(x)/j!)^{i_j},

where y=f(x) and the above sum extends over all nonnegative integer solutions of the system

i_2 + 2i_3 + 3i_4 + ⋯ + (k−1)i_k = k−1,
i_1 + i_2 + ⋯ + i_k = k−1.
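Formula (16) can be verified mechanically. In the sketch below (our own check) we take f(x)=eˣ at x=0, so that y=1 and every f^{(j)}(x)=1; then f⁻¹(y)=ln y, whose derivatives at y=1 equal (−1)^{k−1}(k−1)!:

```python
from itertools import product
from math import factorial

def inverse_derivative_exp_at_1(k):
    # Evaluate formula (16) for f(x) = e^x at x = 0 (so y = 1 and f^{(j)}(x) = 1).
    total = 0.0
    for i in product(range(k), repeat=k):                 # candidate (i_1, ..., i_k)
        if sum(i) != k - 1:                               # i_1 + i_2 + ... + i_k = k - 1
            continue
        if sum(j * i[j] for j in range(1, k)) != k - 1:   # i_2 + 2 i_3 + ... + (k-1) i_k = k - 1
            continue
        denom = 1
        for j in range(1, k):
            denom *= factorial(i[j])                      # i_2! i_3! ... i_k!
        term = (-1) ** (k + i[0] - 1) * factorial(2 * k - i[0] - 2) / denom
        for j in range(k):
            term *= (1.0 / factorial(j + 1)) ** i[j]      # (f^{(j)}(x)/j!)^{i_j}, here f^{(j)} = 1
        total += term                                     # [f'(x)]^{2k-1} = 1 here
    return total

# (ln y)^{(k)} at y = 1 equals (-1)^{k-1} (k-1)!
for k in range(2, 7):
    assert abs(inverse_derivative_exp_at_1(k) - (-1) ** (k - 1) * factorial(k - 1)) < 1e-6
print("formula (16) checked for k = 2..6")
```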

Let x_i∈I, i=1,…,n+1, be n+1 interpolation nodes and let a_1, a_2, …, a_{n+1} ≥ 1 be n+1 natural numbers. Denote by m+1 their sum:

(17) a_1 + a_2 + ⋯ + a_{n+1} = m+1.

We also denote y_i = f(x_i), i=1,…,n+1, and so f⁻¹(y_i) = x_i.

For the construction of the Hermite inverse interpolatory polynomial we consider as interpolation nodes the numbers y_i∈F, i=1,…,n+1, and we need a polynomial

H(y_1,a_1; y_2,a_2; …; y_{n+1},a_{n+1}; f⁻¹|y) = H(y)

which satisfies

(18) H^{(j)}(y_i) = (f⁻¹)^{(j)}(y_i),  j=0,…,a_i−1,  i=1,…,n+1.

It is well known that such a polynomial has the following form:

(19) H(y_1,a_1; y_2,a_2; …; y_{n+1},a_{n+1}; f⁻¹|y) =
 = Σ_{i=1}^{n+1} Σ_{j=0}^{a_i−1} Σ_{k=0}^{a_i−j−1} (f⁻¹)^{(j)}(y_i) · (1/(k! j!)) · [(y−y_i)^{a_i}/ω(y)]^{(k)}_{y=y_i} · ω(y)/(y−y_i)^{a_i−j−k},

where

(20) ω(y) = Π_{i=1}^{n+1} (y−y_i)^{a_i}.

The following equality holds:

(21) f⁻¹(y) = H(y_1,a_1; y_2,a_2; …; y_{n+1},a_{n+1}; f⁻¹|y) + R(f⁻¹, y)

where

(22) R(f⁻¹, y) = ((f⁻¹)^{(m+1)}(θ)/(m+1)!) · ω(y),

θ being a point belonging to the smallest interval determined by the points y, y_1, y_2, …, y_{n+1}.

From (19), for a_1=a_2=⋯=a_{n+1}=1, one obtains the Lagrange inverse interpolatory polynomial, i.e.

L(y_1, y_2, …, y_{n+1}; f⁻¹|y) = Σ_{i=1}^{n+1} x_i · ω(y)/((y−y_i) ω′(y_i)),

and for n=0, (19) gives the Taylor inverse interpolatory polynomial:

T(y_1; f⁻¹|y) = Σ_{j=0}^{m} (1/j!) (f⁻¹)^{(j)}(y_1) (y−y_1)^j.

Let x_s, x_{s+1}, …, x_{s+n} ∈ I be n+1 approximations of the solution x̄ of equation (1). Then another approximation x_{s+n+1} can be obtained by

(23) x_{s+n+1} = H(y_s,a_1; y_{s+1},a_2; …; y_{s+n},a_{n+1}; f⁻¹|0),  s=1,2,…,

where the function H is given by (19) and the corresponding polynomial ω_s is given by

ω_s(y) = Π_{i=0}^{n} (y−y_{s+i})^{a_{i+1}},  s=1,2,…

It can be easily seen that, taking into account (21),

(24) |f(x_{n+s+1})| = |f′(β_s)| · (|(f⁻¹)^{(m+1)}(θ_s)| / (m+1)!) · Π_{i=0}^{n} |f(x_{s+i})|^{a_{i+1}},  s=1,2,…,

if the sequence (x_p)_{p≥0} is generated by (23). For all s≥1, the numbers θ_s belong to the smallest interval determined by 0, y_s, y_{s+1}, …, y_{s+n}, and β_s are numbers belonging to the open intervals determined by x̄ and x_{s+n+1}.

Supposing that the numbers

c_s = |f′(β_s)| · |(f⁻¹)^{(m+1)}(θ_s)| / (m+1)!,  s=1,2,…,

verify the hypotheses of Lemma 2.2 and, moreover, lim_{s→∞} f(x_s) = 0, then, taking into account Lemma 2.1, it follows that the convergence order of method (23) is given by the positive solution of the equation

(25) t^{n+1} − a_{n+1} t^n − a_n t^{n−1} − ⋯ − a_2 t − a_1 = 0.

If a_1=a_2=⋯=a_{n+1}=q, then the corresponding iterative method from (23) has the convergence order ω_{n+1}(q) given by the positive solution of the equation:

t^{n+1} − q t^n − q t^{n−1} − ⋯ − q t − q = 0.

In [5] it is shown that the number ω_n(q) satisfies:

  • a)

ω_n(q) < ω_{n+1}(q),  n=1,2,…;

  • b)

max{q, ((n+1)/(n+2))·(q+1)} < ω_{n+1}(q) < q+1;

  • c)

lim_{n→∞} ω_n(q) = q+1.

For q=1, the properties a)–c) can be found in [3].
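Properties a)–c) can be illustrated numerically. The sketch below (our own check, with the illustrative value q=2) computes ω_k(q) as the positive root of t^k − q t^{k−1} − ⋯ − q t − q by bisection:

```python
# Positive root ω_k(q) of t^k - q t^{k-1} - ... - q t - q = 0, by bisection.
def omega(k, q):
    P = lambda t: t ** k - q * sum(t ** i for i in range(k))
    lo, hi = 1.0, q + 1.0            # P(1) < 0 while P(q+1) = 1 > 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if P(mid) < 0 else (lo, mid)
    return lo

q = 2.0
ws = [omega(k, q) for k in range(1, 12)]     # ws[k-1] = ω_k(q)
assert all(u < v for u, v in zip(ws, ws[1:]))                # property a): increasing in n
assert all(max(q, (n + 1) * (q + 1) / (n + 2)) < ws[n] < q + 1
           for n in range(1, 11))                            # property b)
print(ws[-1])    # approaches q + 1 = 3, illustrating property c)
```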

For n=0, the corresponding iterative method has the form

x_{s+1} = x_s − ((f⁻¹)′(y_s)/1!) f(x_s) + (1/2!) (f⁻¹)″(y_s) f²(x_s) − ⋯ + (−1)^m (1/m!) (f⁻¹)^{(m)}(y_s) f^m(x_s),  s=1,2,…,

i.e., the Chebyshev method. It can be easily seen that the convergence order of this method is m+1.
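Using (16) for k=1,2, namely (f⁻¹)′(y)=1/f′(x) and (f⁻¹)″(y)=−f″(x)/[f′(x)]³, the Chebyshev method for m=2 reads x_{s+1}=x_s−f/f′−f″f²/(2f′³), of order 3. A sketch (the sample equation x³−2=0 is our choice, not the paper's):

```python
f   = lambda x: x ** 3 - 2.0        # sample equation, root 2**(1/3)
df  = lambda x: 3.0 * x ** 2
d2f = lambda x: 6.0 * x

x = 1.5
for _ in range(6):
    fx = f(x)
    # x_{s+1} = x_s - f/f' - f'' f² / (2 f'³)   (Chebyshev step for m = 2)
    x = x - fx / df(x) - d2f(x) * fx ** 2 / (2.0 * df(x) ** 3)
print(x)                            # ≈ 2**(1/3)
```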

Our aim now is to find bounds for the convergence order ω of (23), i.e. for the solutions of (25). First we shall prove the following lemma.

Lemma 3.2.

The positive root ω of equation (25) satisfies

(26) (m+1)^α ≤ ω ≤ 1 + max_{1≤i≤n+1}{a_i},

where

α = (m+1) / ((n+1)(m+1) − Σ_{i=1}^{n+1} (i−1)a_i).
Proof.

Let β=(m+1)^α and P_{n+1}(t) = t^{n+1} − a_{n+1}t^n − ⋯ − a_2 t − a_1. For the first part of inequality (26) it will suffice to show that P_{n+1}(β) ≤ 0. Using the inequality between the weighted arithmetic and geometric means, i.e.

Σ_{i=1}^{n+1} p_i α_i ≥ (Σ_{i=1}^{n+1} p_i) · (Π_{i=1}^{n+1} α_i^{p_i})^{1/Σ_{i=1}^{n+1} p_i},  α_i>0, p_i≥0, i=1,…,n+1, Σ_{i=1}^{n+1} p_i>0,

with p_i=a_i and α_i=β^{i−1}, we obtain

P_{n+1}(β) = β^{n+1} − Σ_{i=1}^{n+1} a_i β^{i−1}
 ≤ β^{n+1} − (Σ_{i=1}^{n+1} a_i) · (Π_{i=1}^{n+1} β^{(i−1)a_i})^{1/Σ_{i=1}^{n+1} a_i}
 = β^{n+1} − (m+1) · (Π_{i=1}^{n+1} β^{(i−1)a_i})^{1/(m+1)} = 0,

for β=(m+1)^α.

For the second inequality in (26), it is easily seen that P_{n+1}(a) ≥ 0 for a = 1 + max_{1≤i≤n+1}{a_i}. ∎
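The bounds (26) are easy to test numerically; the sketch below (our own check, with the illustrative multiplicities (a_1,a_2,a_3)=(1,2,3), so m+1=6) computes ω by bisection and compares it with both sides of (26):

```python
a = (1.0, 2.0, 3.0)              # sample multiplicities (a_1, a_2, a_3), so n + 1 = 3
n1 = len(a)                      # n + 1
m1 = sum(a)                      # m + 1, by (17)

# positive root ω of (25): t^{n+1} - a_{n+1} t^n - ... - a_2 t - a_1 = 0
P = lambda t: t ** n1 - sum(ai * t ** i for i, ai in enumerate(a))
lo, hi = 1.0, 1.0 + max(a)       # bracket given by the upper bound in (26)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if P(mid) < 0 else (lo, mid)
omega = lo

# exponent α of Lemma 3.2; enumerate(a) yields the pairs (i - 1, a_i) in 1-based terms
alpha = m1 / (n1 * m1 - sum(i * ai for i, ai in enumerate(a)))
assert m1 ** alpha <= omega <= 1.0 + max(a)     # the bounds (26)
print(m1 ** alpha, omega, 1.0 + max(a))
```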

4. Interpolatory methods having optimal convergence order

To each permutation (i_1, i_2, …, i_{n+1}) of the set {1,2,…,n+1} there corresponds an iterative method of the form

(27) x_{n+2} = H(y_{i_1},a_{i_1}; y_{i_2},a_{i_2}; …; y_{i_{n+1}},a_{i_{n+1}}; f⁻¹|0),
x_{n+s+2} = H(y_{i_1+s},a_{i_1}; y_{i_2+s},a_{i_2}; …; y_{i_{n+1}+s},a_{i_{n+1}}; f⁻¹|0),  s=1,2,…

There are (n+1)! such iterative methods. Lemma 2.3 offers the possibility to establish which method of the form (27) has the optimal convergence order.

Theorem 4.1.

Among the (n+1)! iterative methods of the form (27) the method with the highest convergence order is the one for which

a_{i_1} ≤ a_{i_2} ≤ ⋯ ≤ a_{i_{n+1}}.

The proof of the above theorem is obtained as a direct consequence of Lemma 2.3.

In the following we shall apply the results of Theorem 4.1 to a particular case. For n=1, by (23) we obtain two iterative methods:

(28) x_3 = H(y_1,a_1; y_2,a_2; f⁻¹|0),  x_1,x_2∈I, y_1=f(x_1), y_2=f(x_2),
x_{n+1} = H(y_{n−1},a_1; y_n,a_2; f⁻¹|0),  n=3,4,…

and

(29) x_3 = H(y_1,a_2; y_2,a_1; f⁻¹|0),  x_1,x_2∈I, y_1=f(x_1), y_2=f(x_2),
x_{n+1} = H(y_{n−1},a_2; y_n,a_1; f⁻¹|0),  n=3,4,…

The convergence order ω_1 of method (28) is given by the positive solution of

t² − a_2 t − a_1 = 0,

and the convergence order ω_2 of method (29) is given by the positive solution of

t² − a_1 t − a_2 = 0.

It is easily seen that if a_2 ≥ a_1 then ω_2 ≤ ω_1, and so the iterative method (28) has the greater convergence order.
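For instance (our numbers, not the paper's), with a_1=1 and a_2=2 the two orders follow from the quadratic formula:

```python
def pos_root(b, c):
    # positive root of t² - b t - c = 0
    return 0.5 * (b + (b * b + 4.0 * c) ** 0.5)

a1, a2 = 1.0, 2.0
w1 = pos_root(a2, a1)     # method (28): t² - a_2 t - a_1 = 0
w2 = pos_root(a1, a2)     # method (29): t² - a_1 t - a_2 = 0
assert w2 <= w1           # a_2 ≥ a_1 implies ω_2 ≤ ω_1
print(w1, w2)             # 1 + 2**0.5 ≈ 2.414 and 2.0
```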

References

  • [1] R. Brent, S. Winograd, F. Wolfe, Optimal iterative processes for root-finding, Numer. Math. 20 (1973), 327–341.
  • [2] H.T. Kung, J.F. Traub, Optimal order and efficiency for iterations with two evaluations, SIAM J. Numer. Anal. 13 (1976), no. 1, 84–99.
  • [3] A.M. Ostrowski, Solution of Equations and Systems of Equations, Academic Press, New York and London, 1966.
  • [4] I. Pǎvǎloiu, Optimal problems concerning interpolation methods of solution of equations, Publ. Inst. Math. (Beograd) 52 (66) (1992), 113–126.
  • [5] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, New Jersey, 1964.
  • [6] B.A. Turowicz, Sur les dérivées d'ordre supérieur d'une fonction inverse, Ann. Polon. Math. 8 (1960), 265–269.

Romanian Academy

”T. Popoviciu” Institute of Numerical Analysis

P.O. Box 68,

3400, Cluj-Napoca

Romania
