Abstract

In this paper we study a third order Steffensen type method obtained by controlling the interpolation nodes in the Hermite inverse interpolation polynomial of degree 2. We study the convergence of the iterative method and we provide new convergence conditions which lead to bilateral approximations for the solution; it is known that the bilateral approximations have the advantage of offering a posteriori bounds of the errors. The numerical examples confirm the advantage of considering these error bounds.

Authors

Ion Păvăloiu
(Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy)

Emil Cătinaş
(Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy)

Keywords

Nonlinear equations; Steffensen type methods; inverse interpolatory polynomials; divided differences.
Cite this paper as:

I. Păvăloiu, E. Cătinaş, On a Steffensen-Hermite method of order three, Appl. Math. Comput., 215 (2009) 7, pp. 2663-2672.

PDF

PDF-LaTeX version of the paper (soon).

About this paper

Print ISSN

0096-3003

Google Scholar citations

[1] S. Amat, J. Blenda, S. Busquier, A Steffensen’s type method with modified functions, Rev. Math. Univ. Parma 7 (2007) 125–133.

[2] S. Amat, S. Busquier, A two step Steffensen’s method under modified convergence conditions, J. Math. Anal. Appl. 324 (2006) 1084–1092.

[3] D.K.R. Babajee, M.Z. Dauhoo, An analysis of the properties of the variant of Newton’s method with third order convergence, Appl. Math. Comput. 183 (2006) 659–684.

[4] E. Catinas, The inexact, inexact perturbed and quasi Newton methods are equivalent models, Math. Comput. 74 (2005) 291–301.

[5] E. Catinas, Methods of Newton and Newton–Krylov type, Ed. Risoprint Cluj-Napoca, Romania, 2007.

[6] M. Grau, An improvement to the computing of nonlinear equation solution, Numer. Algor. 34 (2003) 1–12.

[7] Jisheng Kou, Yitian Li, Some variants of Chebyshev–Halley method with fifth-order convergence, Appl. Math. Comput. 189 (2007) 49–54.

[8] Jisheng Kou, Yitian Li, Modified Chebyshev–Halley methods with sixth-order convergence, Appl. Math. Comput. 188 (2007) 681–685.

[9] Jisheng Kou, Yitian Li, Xiuhua Wang, Some modifications of Newton’s method with fifth-order convergence, J. Comput. Appl. Math. 209 (2007) 146– 152.

[10] Nour Mahammand Aslom, Khalida Inayat, Fifth-order iterative method for solving nonlinear equations, Appl. Math. Comput. 188 (2007) 406–410.

[11] R.J. Sharma, A composite third order Newton–Steffensen method for solving nonlinear equations, Appl. Math. Comput. 169 (2005) 242–246.

[12] M. Frontini, Hermite interpolation and new iterative method for the computation of the roots of nonlinear equations, Calcolo. 40 (2003) 100–119.

[13] I. Pavaloiu, On a Steffensen–Hermite type method for approximating the solutions of nonlinear equations, Rev. Anal. Numér. Théor. Approx. 35 (2006) 87–94.

[14] I. Pavaloiu, Approximation of the root of equations by Aitken–Steffensen-type monotonic sequences, Calcolo. 32 (1995) 69–82.

[15] I. Pavaloiu, Solution of Equations by Interpolation, Ed. Dacia, Cluj-Napoca, 1981 (in Romanian).

[16] I. Pavaloiu, E. Catinas, On a Steffensen type method, IEEE Computer Society, Proceedings of SYNASC 2007, 9th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, Timisoara, Romania, September 26–29, 2007, pp. 369–375.

[17] I. Pavaloiu, N. Pop, Interpolation and Applications, Ed. Risoprint Cluj-Napoca, 2005 (in Romanian).

[18] A. Ostrowski, Solution of Equations in Euclidean and Banach Spaces, Academic Press, New York and London, 1973.

[19] A.B. Turowicz, Sur la dérivée d’ordre superieur d’une fonction inverse, Ann. Polon. Math. 8 (1960) 265–269.

Paper (preprint) in HTML form

On a Steffensen-Hermite method of order three

I. Păvăloiu and E. Cătinaş (corresponding author)
"T. Popoviciu" Institute of Numerical Analysis, P.O. Box 68-1, Cluj-Napoca, Romania, e-mail: {pavaloiu,ecatinas}@ictp.acad.ro
Abstract

In this paper we study a third order Steffensen type method obtained by controlling the interpolation nodes in the Hermite inverse interpolation polynomial of degree 2. We study the convergence of the iterative method and we provide new convergence conditions which lead to bilateral approximations for the solution; it is known that the bilateral approximations have the advantage of offering a posteriori bounds of the errors. The numerical examples confirm the advantage of considering these error bounds.

MSC 2000: 65H05.

Keywords: nonlinear equations, Steffensen type methods, inverse interpolatory polynomials, divided differences.

1 Introduction

Iterative methods play a crucial role in approximating the solutions of nonlinear equations. Methods with superlinear convergence offer good approximations in a reduced number of steps. In a series of papers ([1]–[11]) the authors obtain different methods, or modifications of some known methods, in order to achieve iterative methods with higher convergence orders.

The Steffensen, Aitken or Aitken-Steffensen methods lead to sequences having convergence order at least 2. A natural way to generalize such methods is with the aid of inverse polynomial interpolation (Lagrange, Hermite, Taylor, etc.), with controlled interpolation nodes [12]–[17]. One of the advantages of such methods is the fact that the interpolation nodes may be controlled so that the methods produce sequences with bilateral approximations (both from above and from below) of the solutions. This allows the control of the error at each step [14], [16].

In this paper we shall extend a Steffensen type method using the Hermite inverse interpolatory polynomial of degree 2 with 2 nodes. In [13] we have shown that among all the Steffensen-Hermite methods with two nodes of arbitrary orders, the optimal efficiency index is attained in the case when one node is simple and the other one is double (see [18] for the definition of the efficiency index); we have also shown there that the convergence order of this method is 3. Here we provide new convergence conditions, which offer bilateral approximations of the solution; these are very useful for controlling the error at each iteration step. In Section 2 we study the convergence of this method, in Section 3 we indicate a way of constructing the auxiliary functions used for controlling the interpolation nodes, and some numerical examples are given in Section 4.

Let $c,d\in\mathbb{R}$, $c<d$, $f:[c,d]\to\mathbb{R}$, $g:[c,d]\to[c,d]$, and consider the following equivalent equations

(1) f(x) =0
(2) g(x) =x.

As usual, the first order divided difference of $f$ at the points $a,b\in[c,d]$ will be denoted by $[a,b;f]$; if $a$ is a double node, then $[a,a;f]=f'(a)$. For the second order divided differences we have

$[a,b,e;f]=\dfrac{[b,e;f]-[a,b;f]}{e-a},\qquad a,b,e\in[c,d],$

and if $a$ is a double node, then

$[a,a,b;f]=\dfrac{[a,b;f]-f'(a)}{b-a}.$
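For illustration, these divided differences can be evaluated with a small Python helper (a minimal sketch; the function names are ours, not from the paper), treating a repeated node through the derivative as in the confluent formulas above.

```python
import math

def dd1(f, a, b, df=None):
    # first order divided difference [a, b; f]; for a double node, [a, a; f] = f'(a)
    return df(a) if a == b else (f(b) - f(a)) / (b - a)

def dd2(f, a, b, e, df=None):
    # second order divided difference [a, b, e; f] (here a may be a double node, a = b)
    return (dd1(f, b, e, df) - dd1(f, a, b, df)) / (e - a)

f  = lambda x: math.exp(x) + 10 * x - 6
df = lambda x: math.exp(x) + 10
print(dd2(f, 0.4, 0.4, 0.5, df))   # [0.4, 0.4, 0.5; f] with the node 0.4 doubled
```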

Let F=f([c,d]) and assume the following conditions hold:

A.

$f\in C^3([c,d])$ and $f'(x)\neq 0$ for all $x\in[c,d]$.

By A it follows that $f:[c,d]\to F$ is invertible, so there exists $f^{-1}:F\to[c,d]$.

Let $a_i\in[c,d]$, $i=1,2$, and $b_i=f(a_i)$, $i=1,2$, i.e. $a_i=f^{-1}(b_i)$, and denote $a_1'=(f^{-1})'(b_1)=\frac{1}{f'(a_1)}$. Consider now the inverse interpolatory Hermite polynomial having $b_1$ as double node and $b_2$ as simple node, i.e. the second degree polynomial $H$ determined such that

(3) $H(b_1)=a_1$
$H'(b_1)=a_1'$
$H(b_2)=a_2$.

Using the divided differences on multiple nodes, the resulting Hermite polynomial may be expressed in one of the following equivalent ways [17]:

(4) $H(y)=a_1+[b_1,b_2;f^{-1}](y-b_1)+[b_1,b_2,b_1;f^{-1}](y-b_1)(y-b_2)$
(5) $H(y)=a_1+[b_1,b_1;f^{-1}](y-b_1)+[b_1,b_1,b_2;f^{-1}](y-b_1)^2$

or

(6) $H(y)=a_2+[b_2,b_1;f^{-1}](y-b_2)+[b_2,b_1,b_1;f^{-1}](y-b_2)(y-b_1).$

The remainder is given by

(7) $f^{-1}(y)-H(y)=[y,b_1,b_1,b_2;f^{-1}]\,(y-b_1)^2(y-b_2),\qquad y\in F.$

It can be easily seen that the representations given by (4), (5) and (6) verify condition (3).
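As a sanity check of (3) and (5), the following sketch (illustrative names, not from the paper) builds $H$ from representation (5), with the divided differences of $f^{-1}$ computed from the data $a_1,a_2,b_1,b_2$, and evaluates it at the nodes.

```python
import math

def hermite_inverse(f, df, a1, a2):
    """H from representation (5): b1 = f(a1) is the double node, b2 = f(a2) the simple one.
       A minimal sketch; the helper name is ours."""
    b1, b2 = f(a1), f(a2)
    d1 = 1.0 / df(a1)                              # [b1, b1; f^{-1}] = 1/f'(a1)
    d2 = ((a2 - a1) / (b2 - b1) - d1) / (b2 - b1)  # [b1, b1, b2; f^{-1}]
    return lambda y: a1 + d1 * (y - b1) + d2 * (y - b1) ** 2

f  = lambda x: math.exp(x) + 10 * x - 6
df = lambda x: math.exp(x) + 10
H  = hermite_inverse(f, df, 0.4, 0.5)
print(H(f(0.4)), H(f(0.5)))   # reproduces the nodes 0.4 and 0.5, as required by (3)
```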

B.

Assume that equation (1) has a solution $\bar x\in[c,d]$.

By A it follows that the solution $\bar x$ is unique in $[c,d]$.

One has $\bar x=f^{-1}(0)$, whence, by (4)–(7), one obtains the following representations for $\bar x$:

(8) $\bar x=a_1-[b_1,b_2;f^{-1}]\,b_1+[b_1,b_2,b_1;f^{-1}]\,b_1b_2-r,$
(9) $\bar x=a_1-[b_1,b_1;f^{-1}]\,b_1+[b_1,b_1,b_2;f^{-1}]\,b_1^2-r;$

or

(10) $\bar x=a_2-[b_2,b_1;f^{-1}]\,b_2+[b_2,b_1,b_1;f^{-1}]\,b_2b_1-r,$

where

(11) $r=[0,b_1,b_1,b_2;f^{-1}]\,b_1^2\,b_2.$

If in (8), (9) or (10) we neglect the remainder $r$, one may obtain an approximation for $\bar x$, denoted by $a_3$:

(12) $a_3=a_1-[b_1,b_2;f^{-1}]\,b_1+[b_1,b_2,b_1;f^{-1}]\,b_1b_2,$

or

(13) $a_3=a_1-[b_1,b_1;f^{-1}]\,b_1+[b_1,b_1,b_2;f^{-1}]\,b_1^2,$

or

(14) $a_3=a_2-[b_1,b_2;f^{-1}]\,b_2+[b_1,b_1,b_2;f^{-1}]\,b_1b_2.$

It can be easily seen that

(15) $[b_1,b_1;f^{-1}]=\dfrac{1}{f'(a_1)},$
(16) $[b_1,b_2;f^{-1}]=\dfrac{1}{[a_1,a_2;f]},$
(17) $[b_1,b_1,b_2;f^{-1}]=-\dfrac{[a_1,a_1,a_2;f]}{[a_1,a_2;f]^2\,f'(a_1)}.$
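Identity (17), including its sign, is easy to check numerically; the sketch below (an assumed test function, for illustration only) compares the confluent divided difference of $f^{-1}$ with the right-hand side of (17).

```python
import math

f  = lambda x: math.exp(x) + 10 * x - 6
df = lambda x: math.exp(x) + 10

a1, a2 = 0.3, 0.6
b1, b2 = f(a1), f(a2)

# [b1, b1, b2; f^{-1}] from its definition, using [b1, b2; f^{-1}] = (a2 - a1)/(b2 - b1)
lhs = ((a2 - a1) / (b2 - b1) - 1.0 / df(a1)) / (b2 - b1)

# right-hand side of (17), built from the divided differences of f
d1 = (f(a2) - f(a1)) / (a2 - a1)      # [a1, a2; f]
d2 = (d1 - df(a1)) / (a2 - a1)        # [a1, a1, a2; f]
rhs = -d2 / (d1 ** 2 * df(a1))

print(lhs, rhs)   # the two values agree
```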

For the divided difference $[0,b_1,b_1,b_2;f^{-1}]$ we take into account hypothesis A, i.e. $f^{-1}\in C^3(F)$, and apply the mean value formula for divided differences. There exists $\eta\in\operatorname{int}E_0$ such that

(18) $[0,b_1,b_1,b_2;f^{-1}]=\dfrac{\left(f^{-1}\right)'''(\eta)}{3!},$

where $E_0$ is the smallest interval containing $b_1$, $b_2$ and $0$.

Since $f$ is invertible, it follows that there exists $\xi\in\operatorname{int}e_0$ such that $\eta=f(\xi)$, where $e_0$ is the smallest interval containing $a_1$, $a_2$ and $\bar x$.

It can be easily seen that [15], [17], [19]

(19) $\left(f^{-1}\right)'''(y)=\dfrac{3(f''(x))^2-f'(x)f'''(x)}{(f'(x))^5},\qquad y=f(x).$

Relations (18) and (19) lead to the following representation of the third order divided difference:

(20) $[0,b_1,b_1,b_2;f^{-1}]=\dfrac{3(f''(\xi))^2-f'(\xi)f'''(\xi)}{6(f'(\xi))^5}.$

Let $x_n\in[c,d]$ be an approximation of the solution $\bar x$. If we set in (12)–(14) $a_1=x_n$, $a_2=g(x_n)$ and take into account relations (15)–(17), then we obtain a new approximation $x_{n+1}$ of $\bar x$ [16], [13]:

(21) $x_{n+1}=x_n-\dfrac{f(x_n)}{[x_n,g(x_n);f]}-\dfrac{[x_n,x_n,g(x_n);f]\,f(x_n)\,f(g(x_n))}{[x_n,g(x_n);f]^2\,f'(x_n)},\quad n=0,1,\ldots$
(22) $x_{n+1}=x_n-\dfrac{f(x_n)}{f'(x_n)}-\dfrac{[x_n,x_n,g(x_n);f]\,f^2(x_n)}{[x_n,g(x_n);f]^2\,f'(x_n)},\quad n=0,1,\ldots$
(23) $x_{n+1}=g(x_n)-\dfrac{f(g(x_n))}{[x_n,g(x_n);f]}-\dfrac{[x_n,x_n,g(x_n);f]\,f(x_n)\,f(g(x_n))}{[x_n,g(x_n);f]^2\,f'(x_n)},\quad n=0,1,\ldots$

It can be easily seen that (21)–(23) yield in fact the same approximation $x_{n+1}$.
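In floating point arithmetic, one step of (22) can be sketched as follows (our notation; the divided differences are written out explicitly):

```python
def step_double_node_at_x(f, df, g, x):
    """One step of (22); the double interpolation node is placed at x_n."""
    y = g(x)
    dd1 = (f(y) - f(x)) / (y - x)      # [x_n, g(x_n); f]
    dd2 = (dd1 - df(x)) / (y - x)      # [x_n, x_n, g(x_n); f]
    return x - f(x) / df(x) - dd2 * f(x) ** 2 / (dd1 ** 2 * df(x))
```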

Analogously, if in (12)–(14) we set $a_1=g(x_n)$, $a_2=x_n$, we obtain the approximation $x_{n+1}$ in one of the following (equivalent) forms:

(24) $x_{n+1}=g(x_n)-\dfrac{f(g(x_n))}{[x_n,g(x_n);f]}-\dfrac{[x_n,g(x_n),g(x_n);f]\,f(x_n)\,f(g(x_n))}{[x_n,g(x_n);f]^2\,f'(g(x_n))},\quad n=0,1,\ldots,$
(25) $x_{n+1}=g(x_n)-\dfrac{f(g(x_n))}{f'(g(x_n))}-\dfrac{[x_n,g(x_n),g(x_n);f]\,f^2(g(x_n))}{[x_n,g(x_n);f]^2\,f'(g(x_n))},\quad n=0,1,\ldots,$
(26) $x_{n+1}=x_n-\dfrac{f(x_n)}{[x_n,g(x_n);f]}-\dfrac{[x_n,g(x_n),g(x_n);f]\,f(x_n)\,f(g(x_n))}{[x_n,g(x_n);f]^2\,f'(g(x_n))},\quad n=0,1,\ldots$

Obviously, (24)–(26) represent the same approximation of $\bar x$.
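Analogously, one step of (25) reads (again a sketch with illustrative names; only the place of the double node changes):

```python
def step_double_node_at_gx(f, df, g, x):
    """One step of (25); the double interpolation node is placed at g(x_n)."""
    y = g(x)
    dd1 = (f(y) - f(x)) / (y - x)      # [x_n, g(x_n); f]
    dd2 = (df(y) - dd1) / (y - x)      # [x_n, g(x_n), g(x_n); f]
    return y - f(y) / df(y) - dd2 * f(y) ** 2 / (dd1 ** 2 * df(y))
```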

The approximation $x_{n+1}$ given by (21)–(23) obeys

(27) $\bar x-x_{n+1}=-[0,f(x_n),f(x_n),f(g(x_n));f^{-1}]\,f^2(x_n)\,f(g(x_n)).$

Analogously, for $x_{n+1}$ given by (24)–(26) one obtains:

(28) $\bar x-x_{n+1}=-[0,f(x_n),f(g(x_n)),f(g(x_n));f^{-1}]\,f(x_n)\,f^2(g(x_n)).$

In this paper we shall study the convergence both of the sequence $(x_n)_{n\ge0}$ given by any of the relations (21)–(23) and of the sequence given by any of the relations (24)–(26).

Under some reasonable conditions on $f$ regarding monotonicity and convexity on $[c,d]$, we shall show that the above methods yield monotone sequences. Moreover, these iterations provide bilateral approximations, approaching the solution both from above and from below, which leads to a more precise control of the error [13]–[15].
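This error control can be sketched as a simple driver (hypothetical code, not from the paper): it repeats the step of (22) and stops when the bracket width $|g(x_n)-x_n|$, which under the results of Section 2 bounds $|\bar x-x_n|$, is small enough.

```python
def solve_bracketed(f, df, g, x0, tol=1e-12, itmax=50):
    """Iterate (22), using |g(x_n) - x_n| as an a posteriori error bound."""
    x = x0
    for _ in range(itmax):
        y = g(x)
        if abs(y - x) <= tol:                      # x_n and g(x_n) bracket the solution
            break
        dd1 = (f(y) - f(x)) / (y - x)
        dd2 = (dd1 - df(x)) / (y - x)
        x = x - f(x) / df(x) - dd2 * f(x) ** 2 / (dd1 ** 2 * df(x))
    return x, abs(g(x) - x)
```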

2 Study of convergence

For the study of the method given by (21)–(23), besides A and B we consider the following hypotheses:

1)

equations (1) and (2) are equivalent;

2)

function g is continuous and decreasing on [c,d];

3)

the function $E_f(x)=3(f''(x))^2-f'(x)f'''(x)$ satisfies $E_f(x)\le 0$ for all $x\in[c,d]$.

In the following we shall study the four situations in which $f$ does not change its monotonicity and convexity on $[c,d]$.

Theorem 1

If the functions $f$, $g$ and the point $x_0\in[c,d]$ obey A, B, 1)–3) and, moreover,

i1.

$f'(x)>0$ for all $x\in[c,d]$;

ii1.

$f''(x)\ge 0$ for all $x\in[c,d]$;

iii1.

$g(x_0)\in[c,d]$,

then the following properties hold:

j1.

if $f(x_0)<0$ then the sequence $(x_n)_{n\ge0}$ given by (21)–(23) is increasing and bounded, while the sequence $(g(x_n))_{n\ge0}$ is decreasing and bounded;

jj1.

if $f(x_0)>0$ then the sequence $(x_n)_{n\ge0}$ is decreasing and bounded, while $(g(x_n))_{n\ge0}$ is increasing and bounded;

jjj1.

$\lim x_n=\lim g(x_n)=\bar x$;

jv1.

$|\bar x-x_{n+1}|\le|g(x_{n+1})-x_{n+1}|$, $n=0,1,\ldots$

Proof. Let $x_n\in[c,d]$ be an approximation of $\bar x$ such that $f(x_n)<0$ and $g(x_n)\in[c,d]$. By i1 it follows that $x_n<\bar x$, while by 2) $g(x_n)>\bar x$, i.e., $f(g(x_n))>0$. From i1, ii1 and (21) it can be easily seen that $x_{n+1}>x_n$ and $g(x_{n+1})<g(x_n)$. By (18)–(20) and (27) it follows that there exists $\xi_n\in\,]x_n,g(x_n)[$ such that

(29) $\bar x-x_{n+1}=-\dfrac{E_f(\xi_n)}{6(f'(\xi_n))^5}\,f^2(x_n)\,f(g(x_n)).$

The above equality, together with 3) and i1, implies $x_{n+1}<\bar x$, which in turn gives $g(x_{n+1})>\bar x$. Property j1 is proved.

Let $x_n\in[c,d]$ be such that $f(x_n)>0$ and $g(x_n)\in[c,d]$. Obviously, $x_n>\bar x$ and $g(x_n)<\bar x$, i.e. $f(g(x_n))<0$. By i1, ii1 and (22) it follows that $x_{n+1}<x_n$. From (29) we get $\bar x<x_{n+1}$, i.e. $g(x_{n+1})<\bar x$. These prove jj1.

For proving jjj1, let $\lim_{n\to\infty}x_n=\ell$. Passing to the limit as $n\to\infty$ in (21) and taking into account the continuity of $f$ and $g$ leads to $f(\ell)=0$. Since the solution $\bar x$ is unique in $[c,d]$, it follows that $\ell=\bar x$. The relation $\lim_{n\to\infty}g(x_n)=\bar x$ is obvious, as well as property jv1.

Theorem 2

If $x_0\in[c,d]$ and the functions $f$, $g$ verify A, B, 1)–3) and, moreover,

i2.

$f'(x)<0$ for all $x\in[c,d]$;

ii2.

$f''(x)\ge 0$ for all $x\in[c,d]$;

iii2.

$g(x_0)\in[c,d]$,

then

j2.

if $f(x_0)>0$, then the sequence $(x_n)_{n\ge0}$, generated by any of the relations (21)–(23), is increasing and bounded, while the sequence $(g(x_n))_{n\ge0}$ is decreasing and bounded;

jj2.

if $f(x_0)<0$, then the sequence $(x_n)_{n\ge0}$ generated by any of the relations (21)–(23) is decreasing and bounded, while the sequence $(g(x_n))_{n\ge0}$ is increasing and bounded;

jjj2.

properties 𝐣𝐣𝐣1 and 𝐣𝐯1 of Theorem 1 hold true.

Proof. Let $x_n\in[c,d]$ be such that $f(x_n)>0$ and $g(x_n)\in[c,d]$. By i2 it follows that $x_n<\bar x$, i.e. $g(x_n)>\bar x$ and $f(g(x_n))<0$. If we take into account i2, ii2 and (22), we get $x_{n+1}>x_n$. Hypotheses 3) and i2, together with (29), ensure the relation $x_{n+1}<\bar x$, i.e. $g(x_{n+1})>\bar x$. Since $x_{n+1}>x_n$ it follows that $g(x_{n+1})<g(x_n)$. Property j2 is proved.

Let $x_n\in[c,d]$ be such that $f(x_n)<0$ and $g(x_n)\in[c,d]$. Obviously, from i2 it follows that $x_n>\bar x$ and from 2) it follows that $g(x_n)<\bar x$, i.e. $f(g(x_n))>0$. By i2, ii2 and (21) we get $x_{n+1}<x_n$. By (29), taking into account 3) and i2, it follows that $x_{n+1}>\bar x$, i.e. $g(x_{n+1})<\bar x$. Property jj2 is proved.

Property jjj2 is obvious.

In the following, instead of equation (1) we shall consider equation

(30) h(x)=0

where $h:[c,d]\to\mathbb{R}$, $h(x)=-f(x)$.

We notice that if the function $E_f(x)$ verifies 3), then the function $E_h(x)=3(h''(x))^2-h'(x)h'''(x)=E_f(x)$ also verifies hypothesis 3).

Taking into account the above relations, the following assertions are consequences of Theorems 1 and 2, respectively:

Corollary 3

If $x_0\in[c,d]$ and the functions $f$, $g$ verify hypotheses A, B, 1)–3) and, moreover,

c1.

$f'(x)<0$ for all $x\in[c,d]$;

cc1.

$f''(x)\le 0$ for all $x\in[c,d]$;

ccc1.

$g(x_0)\in[c,d]$,

then the following properties hold:

k1.

if $f(x_0)>0$, then the sequence $(x_n)_{n\ge0}$ generated by (21)–(23) is increasing and bounded, while the sequence $(g(x_n))_{n\ge0}$ is decreasing and bounded;

kk1.

if $f(x_0)<0$, then the sequence $(x_n)_{n\ge0}$ generated by (21)–(23) is decreasing and bounded, while the sequence $(g(x_n))_{n\ge0}$ is increasing and bounded;

kkk1.

The properties 𝐣𝐣𝐣1 and 𝐣𝐯1 of Theorem 1 hold true.

Proof. We notice that if $f$ verifies the hypotheses of this corollary, then the function $h$ verifies the assumptions of Theorem 1, so properties k1–kkk1 follow.

Corollary 4

If $x_0\in[c,d]$ and the functions $f$, $g$ satisfy assumptions A, B, 1)–3) and, moreover,

c2.

$f'(x)>0$ for all $x\in[c,d]$;

cc2.

$f''(x)\le 0$ for all $x\in[c,d]$;

ccc2.

$g(x_0)\in[c,d]$,

then the following are true:

k2.

if $f(x_0)<0$ then the sequence $(x_n)_{n\ge0}$ generated by (21)–(23) is increasing and bounded, while the sequence $(g(x_n))_{n\ge0}$ is decreasing and bounded;

kk2.

if $f(x_0)>0$ then the sequence $(x_n)_{n\ge0}$ generated by (21)–(23) is decreasing and bounded, while the sequence $(g(x_n))_{n\ge0}$ is increasing and bounded;

kkk2.

The properties 𝐣𝐣𝐣1 and 𝐣𝐯1 of Theorem 1 hold true.

Proof. The proof is analogous to the proof of Corollary 3.  

In the following we shall study the convergence of the sequence $(x_n)_{n\ge0}$ generated by (24)–(26). Instead of hypothesis 3) we shall assume the following hypothesis:

3′)

$E_f(x)\ge 0$ for all $x\in[c,d]$.

Theorem 5

If $x_0\in[c,d]$ and the functions $f$, $g$ verify assumptions A, B, 1), 2), 3′) and, moreover,

i5.

$f'(x)>0$ for all $x\in[c,d]$;

ii5.

$f''(x)\ge 0$ for all $x\in[c,d]$;

iii5.

$g(x_0)\in[c,d]$,

then the following are true:

j5.

if $f(x_0)<0$ then the sequence $(x_n)_{n\ge0}$ generated by any of (24)–(26) is increasing and bounded, while the sequence $(g(x_n))_{n\ge0}$ is decreasing and bounded;

jj5.

properties 𝐣𝐣𝐣1 and 𝐣𝐯1 of Theorem 1 hold true.

Proof. Let $x_n\in[c,d]$ be such that $f(x_n)<0$ and $g(x_n)\in[c,d]$. Obviously, by i5 it follows that $x_n<\bar x$ and by 2) we get $g(x_n)>\bar x$, i.e. $f(g(x_n))>0$. Properties i5, ii5 and (26) imply $x_{n+1}>x_n$. Relations (18)–(20) and (28) imply the existence of $\xi_n\in\,]x_n,g(x_n)[$ such that

(31) $\bar x-x_{n+1}=-\dfrac{E_f(\xi_n)}{6(f'(\xi_n))^5}\,f(x_n)\,f^2(g(x_n)).$

This equality, together with 3′), i5 and the inequality $f(x_n)<0$, leads to $\bar x>x_{n+1}$ and $g(x_{n+1})>\bar x$. From $x_{n+1}>x_n$ and 2) we get $g(x_{n+1})<g(x_n)$. Property j5 is proved. Property jj5 is obvious if we take into account the proof of Theorem 1.

Theorem 6

If $x_0\in[c,d]$ and the functions $f$, $g$ obey A, B, 1), 2), 3′) and, moreover,

i6.

$f'(x)<0$ for all $x\in[c,d]$;

ii6.

$f''(x)\ge 0$ for all $x\in[c,d]$;

iii6.

$g(x_0)\in[c,d]$,

then the following are true:

j6.

if $f(x_0)<0$ then the sequence $(x_n)_{n\ge0}$ generated by (24)–(26) is decreasing and bounded, while the sequence $(g(x_n))_{n\ge0}$ is increasing and bounded;

jj6.

Properties 𝐣𝐣𝐣1 and 𝐣𝐯1 of Theorem 1 hold true.

Proof. The proof is analogous to the proof of Theorem 5.  

If instead of equation (1) we consider equation (30), then the following consequences of Theorems 5 and 6, respectively, hold:

Corollary 7

If $x_0\in[c,d]$ and the functions $f$, $g$ verify A, B, 1), 2), 3′) and

c7.

$f'(x)<0$ for all $x\in[c,d]$;

cc7.

$f''(x)\le 0$ for all $x\in[c,d]$;

ccc7.

$g(x_0)\in[c,d]$,

then

k7.

if $f(x_0)>0$ then the sequence $(x_n)_{n\ge0}$ generated by (24)–(26) is increasing and bounded, while $(g(x_n))_{n\ge0}$ is decreasing and bounded;

kk7.

the properties 𝐣𝐣𝐣1 and 𝐣𝐯1 of Theorem 1 are true.

Corollary 8

If $x_0\in[c,d]$ and the functions $f$, $g$ verify A, B, 1), 2), 3′) and

c8.

$f'(x)>0$ for all $x\in[c,d]$;

cc8.

$f''(x)\le 0$ for all $x\in[c,d]$;

ccc8.

$g(x_0)\in[c,d]$,

then

k8.

if $f(x_0)>0$ then the sequence $(x_n)_{n\ge0}$ generated by (24)–(26) is decreasing and bounded, while the sequence $(g(x_n))_{n\ge0}$ is increasing and bounded;

kk8.

Properties jjj1 and 𝐣𝐯1 of Theorem 1 are true.

The convergence order of the sequences studied above is given in the following results.

Theorem 9

Under the hypotheses of Theorem 1 if, moreover,

i9.

the function $g$ is differentiable at $\bar x$, and the divided difference $[x,\bar x;g]$ is bounded on $[c,d]$,

then the q-convergence order of the sequence $(x_n)_{n\ge0}$ generated by any of the relations (21)–(23) is equal to 3.

Proof. Taking into account the equalities $f(\bar x)=0$ and $f(g(\bar x))=0$, relation (29) may be written in the form

$\bar x-x_{n+1}=-\dfrac{E_f(\xi_n)}{6(f'(\xi_n))^5}\left(\dfrac{f(x_n)-f(\bar x)}{x_n-\bar x}\right)^2\dfrac{f(g(x_n))-f(g(\bar x))}{g(x_n)-g(\bar x)}\cdot\dfrac{g(x_n)-g(\bar x)}{x_n-\bar x}\,(x_n-\bar x)^3.$

Now, by the hypotheses of Theorem 1, there exist $\xi_n,\nu_n,\mu_n\in\,]x_n,g(x_n)[$ such that

$\bar x-x_{n+1}=-\dfrac{E_f(\xi_n)}{6(f'(\xi_n))^5}\,(f'(\mu_n))^2\,f'(g(\nu_n))\,[x_n,\bar x;g]\,(x_n-\bar x)^3.$

By A, i1 and i9 it follows that there exists a constant $C>0$ such that

$|\bar x-x_{n+1}|\le C\,|\bar x-x_n|^3,$

which shows that the q-convergence order of the sequence is 3.
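The cubic order can also be observed experimentally; the rough check below (an illustration under the setting of example a) in Section 4, not part of the paper) prints the successive errors of iteration (22).

```python
import math

f  = lambda x: math.exp(x) + 10 * x - 6
df = lambda x: math.exp(x) + 10
g  = lambda x: x - f(x) / df(0)                   # auxiliary function of Section 3, case 1

xs = [0.0]
for _ in range(4):                                # a few steps of (22)
    x = xs[-1]
    y = g(x)
    if y == x:                                    # already converged in double precision
        break
    dd1 = (f(y) - f(x)) / (y - x)
    dd2 = (dd1 - df(x)) / (y - x)
    xs.append(x - f(x) / df(x) - dd2 * f(x) ** 2 / (dd1 ** 2 * df(x)))

ref = xs[-1]                                      # last iterate taken as reference solution
print([abs(x - ref) for x in xs[:-1]])            # the number of exact digits roughly
                                                  # triples per step, consistent with order 3
```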

Theorem 10

Under the hypotheses of Theorem 5, if, moreover, hypothesis i9 of Theorem 9 holds, then the q-convergence order of the sequence generated by any of the relations (24)–(26) is 3.

Proof. The proof is obtained in a similar manner as in Theorem 9.  

3 Choosing the auxiliary function

We notice that in all the results of the previous section the continuity and monotonicity of $g$ are essential (hypothesis 2) requires that $g$ be continuous and decreasing).

In the following we point out a simple way of obtaining such a function, in accordance with the monotonicity and convexity of $f$; a small sketch gathering the four cases is given after the list.

1.

If the function $f$ verifies $f'(x)>0$ and $f''(x)\ge 0$ for all $x\in[c,d]$, then it can be easily seen that $g$ can be chosen in the following way:

$g(x)=x-\dfrac{f(x)}{f'(c)}.$

More generally, if $\lambda_1\in(0,f'(c)]$, then we may take

$g(x)=x-\dfrac{f(x)}{\lambda_1}.$

It can be easily seen that $g'(x)\le 0$, i.e., $g$ is decreasing.

2.

When $f'(x)>0$ and $f''(x)\le 0$ for all $x\in[c,d]$, we may take

$g(x)=x-\dfrac{f(x)}{f'(d)};$

more generally, if $\lambda_2\in(0,f'(d)]$, then we may take

$g(x)=x-\dfrac{f(x)}{\lambda_2},$

and hypothesis 2) is fulfilled.

3.

If $f'(x)<0$ and $f''(x)\ge 0$ for all $x\in[c,d]$, one may take

$g(x)=x-\dfrac{f(x)}{f'(d)},$

or, more generally, for $\lambda_3\in[f'(d),0)$ one may take

$g(x)=x-\dfrac{f(x)}{\lambda_3}.$
4.

Finally, if $f'(x)<0$ and $f''(x)\le 0$ for all $x\in[c,d]$, then one may take

$g(x)=x-\dfrac{f(x)}{f'(c)},$

or, more generally, if $\lambda_4\in[f'(c),0)$ one may take

$g(x)=x-\dfrac{f(x)}{\lambda_4}.$
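The four cases can be gathered into a small helper (a hypothetical sketch; it simply selects the endpoint derivative of smaller modulus as the parameter $\lambda$, which reproduces the choices above):

```python
def make_aux(f, df, c, d):
    """Return g(x) = x - f(x)/lam with lam chosen so that g is decreasing on [c, d]."""
    lam = df(c) if abs(df(c)) <= abs(df(d)) else df(d)
    return lambda x: x - f(x) / lam
```

For example a) below, make_aux with $c=0$, $d=1$ gives exactly the function $g(x)=\frac{-e^x+x+6}{11}$ used there.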

4 Numerical examples

a)

Let $f(x)=e^x+10x-6$. We notice that the equation $f(x)=0$ has a solution $\bar x\in\,]0,1[$, since $f(0)=-5$ and $f(1)=e+4$. This solution is unique, since $f'(x)=e^x+10>0$ for $x\in[0,1]$. The second and third derivatives of $f$ are $f''(x)=f'''(x)=e^x$, while $E_f(x)$ is given by $E_f(x)=2e^x(e^x-5)$, so that for $x\in[0,1]$ we have $E_f(x)<0$. The hypotheses of Theorem 1 are therefore satisfied. Let $g$ be given by

$g(x)=x-\dfrac{f(x)}{f'(0)},$

i.e.,

$g(x)=\dfrac{-e^x+x+6}{11}.$

We take $x_0=0$ and we have $g(0)=\frac{5}{11}\in[0,1]$.

If we consider $x_0=1$, then $g(1)=\frac{7-e}{11}\in[0,1]$.

The numerical results in Tables 1 and 2 are in accordance with the properties of the sequences $x_i$ and $g(x_i)$ proved in Theorem 1.
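For reference, a short script reproducing the iterations of Table 1 (a sketch using formula (22); the printed layout is ours) may look as follows.

```python
import math

f  = lambda x: math.exp(x) + 10 * x - 6
df = lambda x: math.exp(x) + 10
g  = lambda x: (-math.exp(x) + x + 6) / 11        # g(x) = x - f(x)/f'(0)

x = 0.0                                           # x0 = 0, as in Table 1
for i in range(4):
    y = g(x)
    print(i, x, f(x), y, y - x)                   # the columns of Table 1
    if abs(y - x) < 1e-15:
        break
    dd1 = (f(y) - f(x)) / (y - x)                 # [x_n, g(x_n); f]
    dd2 = (dd1 - df(x)) / (y - x)                 # [x_n, x_n, g(x_n); f]
    x = x - f(x) / df(x) - dd2 * f(x) ** 2 / (dd1 ** 2 * df(x))
```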

b)

Consider the equation

$f(x)=xe^x+6x+6=0$

for $x\in[-1,0]$. Since $f(0)=6$ and $f(-1)=-\frac{1}{e}$, it follows that the above equation has a solution $\bar x\in[-1,0]$. The derivatives of $f$ are

$f'(x)=e^x(x+1)+6,$
$f''(x)=e^x(x+2),$
$f'''(x)=e^x(x+3).$

It can be easily seen that if $x\in[-1,0]$ then $f'(x)>0$ and $f''(x)>0$.

The function $E_f$ has the form

$E_f(x)=e^x(x+3)\left(\dfrac{2x^2+8x+9}{x+3}\,e^x-6\right).$

Elementary considerations on the function

$h(x)=\dfrac{2x^2+8x+9}{x+3}\,e^x-6$

lead to the inequality $h(x)<0$ for $x\in[-1,0]$, i.e. $E_f(x)<0$ for $x\in[-1,0]$.

We consider $g$ given by

$g(x)=x-\dfrac{f(x)}{f'(-1)}=-\dfrac{xe^x+6}{6}.$

We have

$g'(x)=-\dfrac{(x+1)e^x}{6}\le 0,\qquad x\in[-1,0].$

For $x_0=-1$ one has $g(-1)=-\dfrac{6-\frac{1}{e}}{6}\in[-1,0]$, so the hypotheses of Theorem 1 are satisfied.

The approximations from Tables 3 and 4 are again in accordance with Theorem 1.
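The sign of $E_f$ claimed above can also be confirmed by a rough grid check (illustration only):

```python
import math

Ef = lambda x: (3 * (math.exp(x) * (x + 2)) ** 2
                - (math.exp(x) * (x + 1) + 6) * math.exp(x) * (x + 3))

# hypothesis 3) of Theorem 1 requires E_f <= 0; the maximum over a grid of [-1, 0] is negative
print(max(Ef(-1 + k / 1000) for k in range(1001)))
```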

c)

We consider the equation

$f(x)=x^2+x+e^x-2=0,\qquad x\in[0,1].$

Since $f(0)=-1$ and $f(1)=e$, it follows that the above equation has a solution in $[0,1]$.

The derivatives of f are given by

$f'(x)=2x+1+e^x>0,\qquad\text{for } x\in[0,1],$
$f''(x)=2+e^x>0,\qquad\text{for } x\in[0,1],$
$f'''(x)=e^x.$

The function $E_f(x)$ is given by

$E_f(x)=12+(11-2x)e^x+2e^{2x}>0,\qquad\text{for } x\in[0,1].$

We consider

$g(x)=x-\dfrac{f(x)}{f'(0)}=\dfrac{1}{2}\left(x-x^2-e^x+2\right),$

with

$g'(x)=\dfrac{1}{2}\left(1-2x-e^x\right)\le 0,\qquad x\in[0,1].$

For $x_0=0$ we have $g(0)=\frac{1}{2}\in[0,1]$.

The hypotheses of Theorem 5 are satisfied.

The approximations of $\bar x$ from Tables 5 and 6 are in accordance with the statement of Theorem 5.
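A script analogous to the one given for example a), but now using formula (25) (the variant covered by Theorem 5), reproduces Table 5; again a sketch in our notation.

```python
import math

f  = lambda x: x * x + x + math.exp(x) - 2
df = lambda x: 2 * x + 1 + math.exp(x)
g  = lambda x: 0.5 * (x - x * x - math.exp(x) + 2)   # g(x) = x - f(x)/f'(0)

x = 0.0                                              # x0 = 0, as in Table 5
for i in range(5):
    y = g(x)
    print(i, x, f(x), y, y - x)                      # the columns of Table 5
    if abs(y - x) < 1e-15:
        break
    dd1 = (f(y) - f(x)) / (y - x)                    # [x_n, g(x_n); f]
    dd2 = (df(y) - dd1) / (y - x)                    # [x_n, g(x_n), g(x_n); f]
    x = y - f(y) / df(y) - dd2 * f(y) ** 2 / (dd1 ** 2 * df(y))
```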

The numerical examples show that the quantities $|g(x_n)-x_n|$ offer better error bounds than $|f(x_n)|$.

Acknowledgments. We are grateful to an anonymous referee for careful reading of the manuscript and for valuable suggestions.

References

  • [1] S. Amat, J. Blenda and S. Busquier, A Steffensen’s type method with modified functions. Rev. Math. Univ. Parma. 7:125-133 (2007).
  • [2] S. Amat and S. Busquier, A two step Steffensen’s method under modified convergence conditions. J. Math. Anal. Appl. 324:1084-1092 (2006).
  • [3] D.K.R. Babajee and M.Z. Dauhoo, An analysis of the properties of the variant of Newton’s method with third order convergence. Appl. Math. Comput. 183:659-684 (2006).
  • [4] E. Cătinaş, The inexact, inexact perturbed and quasi Newton methods are equivalent models. Math. Comp., 74:291-301 (2005).
  • [5] E. Cătinaş, Methods of Newton and Newton-Krylov type. Ed. Risoprint, Cluj-Napoca, Romania, 2007.
  • [6] M. Grau, An improvement to the computing of nonlinear equation solution. Numer. Algorithms. 34:1-12 (2003).
  • [7] Jisheng Kou and Yitian Li, Some variants of Chebyshev-Halley method with fifth-order convergence. Appl. Math. Comput. 189:49-54 (2007).
  • [8] Jisheng Kou and Yitian Li, Modified Chebyshev-Halley methods with sixth-order convergence. Appl. Math. Comput. 188:681-685 (2007).
  • [9] Jisheng Kou, Yitian Li and Xiuhua Wang, Some modifications of Newton’s method with fifth-order convergence. J. Comp. Appl. Math. 209:146-152 (2007).
  • [10] Nour Mahammand Aslom and Khalida Inayat, Fifth-order iterative method for solving nonlinear equations. Appl. Math. Comput. 188:406-410 (2007).
  • [11] R.J. Sharma, A composite third order Newton-Steffensen method for solving nonlinear equations. Appl. Math. Comput. 169:242-246 (2005).
  • [12] M. Frontini, Hermite interpolation and new iterative method for the computation of the roots of nonlinear equations. Calcolo. 40:100-119 (2003).
  • [13] I. Păvăloiu, On a Steffensen-Hermite type method for approximating the solutions of nonlinear equations, Rev. Anal. Numér. Théor. Approx. 35:87-94 (2006).
  • [14] I. Păvăloiu, Approximation of the root of equations by Aitken-Steffensen-type monotonic sequences. Calcolo. 32:69-82 (1995).
  • [15] I. Păvăloiu, Solution of Equations by Interpolation. Ed. Dacia, Cluj-Napoca, 1981 (in Romanian).
  • [16] I. Păvăloiu and E. Cătinaş, On a Steffensen type method, IEEE Computer Society, Proceedings of SYNASC 2007, 9th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, Timişoara, Romania, September 26–29, 2007, pp. 369–375.
  • [17] I. Păvăloiu and N. Pop, Interpolation and Applications. Ed. Risoprint, Cluj-Napoca, 2005 (in Romanian).
  • [18] A. Ostrowski, Solution of Equations in Euclidean and Banach Spaces. Academic Press, New York and London, 1973.
  • [19] A.B. Turowicz, Sur la dérivée d’ordre superieur d’une fonction inverse. Ann. Polon. Math. 8:265-269 (1960).
i    x_i    f(x_i)    g(x_i)    g(x_i)-x_i
0 0 -5 2.7e-5 4.5e-01
1 4.440664289515356e-1 -3.0e-04 4.4e-1 2.7e-05
2 4.440925265279589e-1 -8.8e-16 4.4e-1 1.1e-16
Table 1: Numerical results for $f(x)=e^x+10x-6$, $x_0=0$.
i    x_i    f(x_i)    g(x_i)    x_i-g(x_i)
0 1 6.7e+00 3.8e-1 6.1e-1
1 4.443161590489098e-1 2.5e-03 4.4e-1 2.3e-4
2 4.440925265279666e-1 8.7e-14 4.4e-1 8.0e-15
Table 2: Numerical results for $f(x)=e^x+10x-6$, $x_0=1$.
i    x_i    f(x_i)    g(x_i)    g(x_i)-x_i
0 -1 -3.6e-1 -9.3e-1 6.1e-2
1 -9.388063596878438e-1 -5.2e-8 -9.3e-1 8.6e-9
2 -9.388063510535405e-1 0 -9.3e-1 0
Table 3: Numerical results for $f(x)=xe^x+6x+6$, $x_0=-1$.
i    x_i    f(x_i)    g(x_i)    x_i-g(x_i)
0 0 6 -1 -1
1 -9.373133790648003e-1 8.9e-03 -9.3e-1 1.4e-03
2 -9.388063510532724e-1 1.6e-12 -9.3e-1 2.6e-13
3 -9.388063510535405e-1 0 -9.3e-1 0
Table 4: Numerical results for $f(x)=xe^x+6x+6$, $x_0=0$.
i    x_i    f(x_i)    g(x_i)    g(x_i)-x_i
0 0 -1 5.0e-1 5.0e-01
1 3.812436839992096e-1 -9.3e-3 3.8e-1 4.6e-03
2 3.841231457070055e-1 -1.4e-8 3.8e-1 7.3e-09
3 3.841231502186257e-1 0 3.8e-1 5.5e-17
Table 5: Numerical results for $f(x)=x^2+x+e^x-2$, $x_0=0$.
i    x_i    f(x_i)    g(x_i)    x_i-g(x_i)
0 1 2.7e+00 -3.5e-1 1.3e+00
1 8.171724311528673e-1 1.7e+00 -5.7e-2 8.7e-01
2 4.455499951929994e-1 2.0e-01 3.4e-01 1.0e-01
3 3.841760770231760e-1 1.7e-04 3.8e-1 8.5e-05
4 3.841231502186540e-1 9.1e-14 3.8e-1 4.5e-14
5 3.841231502186256e-1 -4.4e-16 3.8e-1 2.2e-16
Table 6: Numerical results for $f(x)=x^2+x+e^x-2$, $x_0=1$.