Extended convergence of two-step iterative methods for solving equations with applications

Ioannis K. Argyros and Santosh George
(Date: March 18, 2018; accepted: June 12, 2018; published online: December 18, 2024.)
Abstract.

The convergence of two-step iterative methods of third and fourth order of convergence is studied under weaker hypotheses than in earlier works, using our new idea of the restricted convergence region. This way, we obtain a finer semilocal and local convergence analysis under the same or weaker hypotheses. Hence, we extend the applicability of these methods to cases not covered before. Numerical examples are used to compare our results favorably to earlier ones.

Key words and phrases:
Banach space, restricted convergence region, convergence of iterative method.
2005 Mathematics Subject Classification:
65G99, 65H10, 49M15, 65J15.
Department of Mathematical Sciences, Cameron University, Lawton, OK 73505 USA, e-mail: iargyros@cameron.edu
Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, India-575 025, e-mail: sgeorge@nitk.ac.in.

1. Introduction

Let $B_1$, $B_2$ stand for Banach spaces and $\Omega \subseteq B_1$ be a nonempty, convex and open set. By $LB(B_1,B_2)$ we denote the space of bounded linear operators from $B_1$ into $B_2$.

There is a plethora of problems in various disciplines that can be written using mathematical modeling like

(1) $F(x)=0,$

where $F:\Omega\to B_2$ is differentiable in the sense of Fréchet. Therefore, finding a solution $x^*$ of equation (1) is of great importance and a challenge [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]. One wishes $x^*$ to be found in closed form, but this is possible only in special cases. This is why we resort to iterative methods approximating $x^*$. Numerous studies have been published on the local as well as the semilocal convergence of iterative methods. Among these methods is the single step Newton's method defined by

(2) $z_0\in\Omega,\quad z_{n+1}=z_n-F'(z_n)^{-1}F(z_n)$

for each $n=0,1,2,\ldots$, which is considered the most popular.

Iterative methods converge under certain hypotheses. However, their convergence region is small in general. Finding a set $D$ containing the iterates that is more precise than $\Omega$ is very important, since the Lipschitz constants on $D$ will be at least as tight as those on $\Omega$. This in turn leads to a finer convergence analysis of these methods. We pursue this goal in the present study of the two-step fourth convergence order Newton's method defined as

(3) $x_0\in\Omega,\quad y_n=x_n-F'(x_n)^{-1}F(x_n),$
$x_{n+1}=y_n-F'(y_n)^{-1}F(y_n),$

as well as the two-step third order Traub method [21]

(4) $\bar x_0\in\Omega,\quad \bar y_n=\bar x_n-F'(\bar x_n)^{-1}F(\bar x_n),$
$\bar x_{n+1}=\bar y_n-F'(\bar x_n)^{-1}F(\bar y_n).$
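For readers who wish to experiment, the substeps of (3) and (4) can be sketched in the scalar case $B_1=B_2=\mathbb{R}$ as follows. The function names and the test equation $x^3-8=0$ are our own illustration, not part of the analysis; note that (4) reuses the derivative $F'(\bar x_n)$ in its second substep, while (3) evaluates a fresh derivative at $y_n$.

```python
def two_step_newton(f, df, x0, tol=1e-12, max_iter=50):
    """Method (3): y_n = x_n - f(x_n)/f'(x_n); x_{n+1} = y_n - f(y_n)/f'(y_n)."""
    x = x0
    for _ in range(max_iter):
        y = x - f(x) / df(x)
        x_new = y - f(y) / df(y)   # second substep uses a new derivative
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def traub(f, df, x0, tol=1e-12, max_iter=50):
    """Method (4): the second substep reuses f'(x_n) (one derivative per step)."""
    x = x0
    for _ in range(max_iter):
        d = df(x)
        y = x - f(x) / d
        x_new = y - f(y) / d       # same derivative as the first substep
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustration: solve x^3 - 8 = 0 (root 2) starting from x_0 = 3.
f = lambda x: x**3 - 8
df = lambda x: 3 * x**2
print(two_step_newton(f, df, 3.0))  # close to 2.0
print(traub(f, df, 3.0))            # close to 2.0
```

Method (4) trades one derivative evaluation per step for a lower (third) order of convergence.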

The local as well as the semilocal convergence of methods (3) and (4) is carried out under the same set of hypotheses.

The layout of the rest of the article involves the semilocal and local convergence of these methods in Section 2 and Section 3, respectively. Numerical examples are given in Section 4.

2. Semilocal convergence analysis

We present first the semilocal convergence analysis for method (3). To this end, we need an auxiliary result on majorizing sequences for method (3). The proof is an extension of the corresponding one given by us in [9].

Let $l>0$, $l_0>0$ and $\eta>0$ be given parameters. Define the parameters $\gamma$, $s_0$ and $t_1$ by

(5) $\gamma=\dfrac{2l}{l+\sqrt{l^2+8l_0l}},\quad s_0=\eta\quad\text{and}\quad t_1=s_0\left[1+\dfrac{l_0s_0}{2(1-l_0s_0)}\right],\quad\text{for }l_0s_0<1,$

and the scalar sequence $\{t_n\}$ for each $n=1,2,\ldots$ by

$t_0=0,\quad t_1=s_0+\dfrac{ls_0^2}{2(1-l_0s_0)},$

$s_n=t_n+\dfrac{l(t_n-s_{n-1})^2}{2(1-l_0t_n)},$

(6) $t_{n+1}=s_n+\dfrac{l(s_n-t_n)^2}{2(1-l_0s_n)}.$
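The recurrence (6) is straightforward to tabulate numerically. The following sketch is our own illustration; the sample values $l_0=1$, $l=2$, $\eta=0.1$ are chosen so that criterion (7) below holds, and the iterates are compared with the bound $t^{**}=\eta/(1-\gamma)$.

```python
import math

def majorizing_sequence(l0, l, eta, n=10):
    """Tabulate t_0, t_1, t_2, ... from the recurrence (6), with s_0 = eta."""
    s_prev = eta                                          # s_0
    t = s_prev + l * s_prev**2 / (2 * (1 - l0 * s_prev))  # t_1
    ts = [0.0, t]
    for _ in range(1, n):
        s = t + l * (t - s_prev)**2 / (2 * (1 - l0 * t))  # s_n
        t = s + l * (s - t)**2 / (2 * (1 - l0 * s))       # t_{n+1}
        s_prev = s
        ts.append(t)
    return ts

# Sample parameters (ours): the sequence increases and stays below t**.
l0, l, eta = 1.0, 2.0, 0.1
gamma = 2 * l / (l + math.sqrt(l**2 + 8 * l0 * l))
t_upper = eta / (1 - gamma)         # t** of Lemma 1
ts = majorizing_sequence(l0, l, eta)
print(ts[-1], t_upper)
```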
Lemma 1.

Let $l>0$, $l_0>0$ and $\eta>0$ be given parameters. Suppose that

(7) $0<\max\left\{\dfrac{l(s_0-t_0)}{2(1-l_0s_0)},\ \dfrac{l(t_1-s_0)}{2(1-l_0t_1)}\right\}\le\gamma<1-l_0s_0.$

Then, the sequence $\{t_n\}$ is nondecreasing, bounded from above by $t^{**}=\dfrac{\eta}{1-\gamma}$, and converges to its unique least upper bound $t^*$ satisfying $\eta\le t^*\le t^{**}$. Moreover, the following items hold for each $n=1,2,\ldots$

(8) $0<t_{n+1}-s_n\le\gamma(s_n-t_n)\le\gamma^{2n+1}\eta,$

(9) $0<s_n-t_n\le\gamma(t_n-s_{n-1})\le\gamma^{2n}\eta$

and

(10) $t_n\le s_n\le t_{n+1}.$
Proof.

Estimations (8)–(10) hold true, if

(11) $0<\dfrac{l(s_m-t_m)}{2(1-l_0s_m)}\le\gamma,$

(12) $0<\dfrac{l(t_{m+1}-s_m)}{2(1-l_0t_{m+1})}\le\gamma$

and

(13) $0\le t_m\le s_m\le t_{m+1}.$

Estimations (11)–(13) hold true for $m=0$, by (5)–(7). Suppose (11)–(13) hold true for $m=1,2,\ldots,n$.

By (8) and (9), we get

(14) $s_m\le t_m+\gamma^{2m}\eta\le s_{m-1}+\gamma^{2m-1}\eta+\gamma^{2m}\eta\le\cdots\le\eta+\gamma\eta+\cdots+\gamma^{2m}\eta=\dfrac{1-\gamma^{2m+1}}{1-\gamma}\eta<\dfrac{\eta}{1-\gamma}=t^{**}$

and

(15) $t_{m+1}\le s_m+\gamma^{2m+1}\eta\le t_m+\gamma^{2m}\eta+\gamma^{2m+1}\eta\le\cdots\le\eta+\gamma\eta+\cdots+\gamma^{2m+1}\eta=\dfrac{1-\gamma^{2m+2}}{1-\gamma}\eta<\dfrac{\eta}{1-\gamma}=t^{**}.$

Then, (11) shall hold, if

(16) $\dfrac{l\gamma^{2m}\eta}{2\left(1-l_0\dfrac{1-\gamma^{2m+1}}{1-\gamma}\eta\right)}\le\gamma$

or

(17) $\dfrac{l}{2}\gamma^{2m}\eta+l_0\gamma\dfrac{1-\gamma^{2m+1}}{1-\gamma}\eta\le\gamma$

or

(18) $\dfrac{l}{2}\gamma^{2m-1}\eta+l_0(1+\gamma+\cdots+\gamma^{2m})\eta-1\le0.$

We are motivated by (18) to define the recurrent polynomials $f_m$ on the interval $[0,1)$ by

(19) $f_m(t)=\dfrac{l}{2}t^{2m-1}\eta+l_0(1+t+\cdots+t^{2m})\eta-1,$

which satisfies

(20) $f_{m+1}(t)=f_m(t)+p(t)t^{2m-1}\eta,$

where the polynomial $p$ is given by

(21) $p(t)=l_0t^3+\left(\dfrac{l}{2}+l_0\right)t^2-\dfrac{l}{2}=(t+1)\left(l_0t^2+\dfrac{l}{2}t-\dfrac{l}{2}\right),$

so $p(\gamma)=0$.

Notice, in particular from (20) that

(22) $f_{m+1}(\gamma)=f_m(\gamma)=\cdots=\lim_{m\to\infty}f_m(\gamma)=:f_\infty(\gamma).$

Evidently, by (22), (18) is true, if

(23) $f_\infty(\gamma)\le0.$

But

(24) $f_\infty(\gamma)=\dfrac{l_0\eta}{1-\gamma}-1.$

Then, by (7) and (24), we see that (23) is satisfied.

Similarly, to show (12), we must have

(25) $\dfrac{l\gamma^{2m+1}\eta}{2\left(1-l_0\dfrac{1-\gamma^{2m+2}}{1-\gamma}\eta\right)}\le\gamma$

or

(26) $\dfrac{l}{2}\gamma^{2m}\eta+l_0(1+\gamma+\cdots+\gamma^{2m+1})\eta-1\le0,$

leading to the definition of the recurrent functions $g_m$ on the interval $[0,1)$ by

$g_m(t)=\dfrac{l}{2}t^{2m}\eta+l_0(1+t+\cdots+t^{2m+1})\eta-1,$

which satisfies

$g_{m+1}(t)=g_m(t)+p(t)t^{2m}\eta,$

so again

$g_{m+1}(\gamma)=g_m(\gamma)=\cdots=\lim_{m\to\infty}g_m(\gamma)=:g_\infty(\gamma).$

Item (26) holds, if

(27) $g_\infty(\gamma)\le0.$

But

(28) $g_\infty(\gamma)=\dfrac{l_0\eta}{1-\gamma}-1,$

so (27) holds true by (7) and (28). Item (13) also holds true by (6), (11) and (12). Hence, the induction for (11)–(13) is complete, and items (8)–(10) hold for each $n=1,2,\ldots$. It follows from (13)–(15) that the sequence $\{t_n\}$ is nondecreasing and bounded from above by $t^{**}$, and as such it converges to $t^*$. ∎

Remark 2.

It is worth noticing that if

$l_0\eta\le\dfrac12,$

then

$\dfrac{l(t_1-s_0)}{2(1-l_0t_1)}\le\dfrac{l(s_0-t_0)}{2(1-l_0s_0)}.$

That is, (7) holds, if

$0<\dfrac{l\eta}{2(1-l_0\eta)}\le\gamma<1-l_0\eta,$

or equivalently, if

(29) $h_A=l_1\eta\le\dfrac12,$

where

(30) $l_1=\dfrac18\left(l+4l_0+\sqrt{l^2+8l_0l}\right).$

Hence, (29) and (30) can replace (7) in Lemma 1 and in what follows from now on. The sufficient convergence criterion (29) is similar to the corresponding one for Newton's method given by us in [11], if $l$ replaces $l_1$. But $l\le l_1$, where $l_1$ is the Lipschitz constant on $\Omega$. Hence, the sufficient convergence criteria for Newton's method in [11] are also improved.
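The quantities in (29) and (30) are easy to evaluate. The sketch below (the sample values are our own) computes $h_A$ and illustrates that, since the restricted constant $l$ satisfies $l_0\le l$, criterion (29) can hold where the classical condition $l\eta\le\frac12$ fails.

```python
import math

def h_A(l0, l, eta):
    """Criterion (29): convergence is guaranteed when h_A = l_1 * eta <= 1/2,
    with l_1 computed from (30)."""
    l1 = (l + 4 * l0 + math.sqrt(l**2 + 8 * l0 * l)) / 8
    return l1 * eta

# Sample values (ours): when l0 <= l we have l1 <= l, so (29) can hold
# even though the classical condition l * eta <= 1/2 fails.
print(h_A(1.0, 2.0, 0.3))   # about 0.3927 <= 0.5, while l * eta = 0.6 > 0.5
```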

Let $S(x,a)$ stand for the open ball in $B_1$ with center $x\in B_1$ and of radius $a>0$. By $\bar S(x,a)$, we denote the closure of $S(x,a)$.

The semilocal convergence of method (3) uses the conditions (A):

  • (a1)

    $F:\Omega\subseteq B_1\to B_2$ is a continuously differentiable operator in the sense of Fréchet, and there exists $x_0\in\Omega$ such that $F'(x_0)^{-1}\in LB(B_2,B_1)$ with $\|F'(x_0)^{-1}F(x_0)\|\le\eta$.

  • (a2)

    There exists $l_0>0$ such that for each $x\in\Omega$

    $\|F'(x_0)^{-1}(F'(x)-F'(x_0))\|\le l_0\|x-x_0\|.$

    Define $S_0=\Omega\cap S\left(x_1,\frac{1}{l_0}-\eta\right)$, where $x_1=x_0-F'(x_0)^{-1}F(x_0)$ and $l_0\eta<1$ by (29) and (30).

  • (a3)

    There exists $l>0$ such that for each $x,y\in S_0$

    $\|F'(x_0)^{-1}(F'(y)-F'(x))\|\le l\|y-x\|.$
  • (a4)

    Hypotheses of Lemma 1 hold with (7) replaced by (29) and (30).

  • (a5)

    $\bar S(x_0,t^*)\subseteq\Omega$, where $t^*$ is given in Lemma 1.

  • (a6)

    There exists $t_1^*\ge t^*$ such that $l_0(t^*+t_1^*)<2$.

Set $S_1=\Omega\cap\bar S(x_0,t_1^*)$. Next, we show the semilocal convergence result for method (3).

Theorem 3.

Assume that the conditions (A) hold. Then, the sequence $\{x_n\}$ generated by method (3) remains in $S\left(y_0,\frac{1}{l_0}-\eta\right)$ for each $n=1,2,\ldots$ and converges to some $x^*$, which is the only solution of equation $F(x)=0$ in the set $S_1$.

Proof.

We must prove using mathematical induction that

(31) $\|x_{m+1}-y_m\|\le t_{m+1}-s_m$

and

(32) $\|y_m-x_m\|\le s_m-t_m.$

By (a1) and (29), we have

$\|y_0-x_0\|=\|F'(x_0)^{-1}F(x_0)\|\le\eta<\dfrac{1}{l_0}-\eta,$

so $x_0\in S\left(y_0,\frac{1}{l_0}-\eta\right)$ and (32) holds for $m=0$. By method (3) for $m=0$, (a1), (6) and (a2), we have in turn that

$\|x_1-y_0\|=\left\|F'(x_0)^{-1}[F(y_0)-F(x_0)-F'(x_0)(y_0-x_0)]\right\|$
$=\left\|\int_0^1F'(x_0)^{-1}\big(F'(x_0+\tau(y_0-x_0))-F'(x_0)\big)d\tau\,(y_0-x_0)\right\|$
$\le\dfrac{l_0}{2}\|y_0-x_0\|^2$
$\le\dfrac{l_0}{2}(s_0-t_0)^2$
$\le t_1-s_0<\dfrac{1}{l_0}-\eta,$

so $x_1\in\bar S\left(y_0,\frac{1}{l_0}-\eta\right)$.

Then, by (a2), we have

(33) $\|F'(x_0)^{-1}(F'(x_1)-F'(x_0))\|\le l_0\|x_1-x_0\|\le l_0t_1<1,$

so by (33) and the Banach lemma on invertible operators [18], $F'(x_1)^{-1}\in LB(B_2,B_1)$ and

(34) $\|F'(x_1)^{-1}F'(x_0)\|\le\dfrac{1}{1-l_0\|x_1-x_0\|}.$

In view of method (3) and (a3), we get

$\|F'(x_0)^{-1}F(x_1)\|=\left\|F'(x_0)^{-1}[F(x_1)-F(y_0)-F'(y_0)(x_1-y_0)]\right\|$
$=\left\|\int_0^1F'(x_0)^{-1}\big(F'(y_0+\tau(x_1-y_0))-F'(y_0)\big)d\tau\,(x_1-y_0)\right\|$
$\le\dfrac{l}{2}\|x_1-y_0\|^2\le\dfrac{l}{2}(t_1-s_0)^2,$

so

$\|y_1-x_1\|=\left\|\left[F'(x_1)^{-1}F'(x_0)\right]\left[F'(x_0)^{-1}F(x_1)\right]\right\|$
$\le\|F'(x_1)^{-1}F'(x_0)\|\,\|F'(x_0)^{-1}F(x_1)\|$
$\le\dfrac{l(t_1-s_0)^2}{2(1-l_0t_1)}=s_1-t_1,$

and

$\|y_1-y_0\|\le\|y_1-x_1\|+\|x_1-y_0\|\le s_1-t_1+t_1-s_0=s_1-s_0<\dfrac{1}{l_0}-\eta,$

so (32) holds for $m=1$ and $y_1\in\bar S\left(y_0,\frac{1}{l_0}-\eta\right)$.

Using method (3) as above, we have

$\|x_2-y_1\|\le\|F'(y_1)^{-1}F'(x_0)\|\,\|F'(x_0)^{-1}F(y_1)\|$
$\le\dfrac{l\|y_1-x_1\|^2}{2(1-l_0\|y_1-x_0\|)}$
$\le\dfrac{l(s_1-t_1)^2}{2(1-l_0s_1)}=t_2-s_1,$

and

$\|x_2-y_0\|\le\|x_2-y_1\|+\|y_1-y_0\|\le t_2-s_1+s_1-s_0=t_2-s_0<\dfrac{1}{l_0}-\eta,$

so (31) holds for $m=1$ and $x_2\in\bar S\left(y_0,\frac{1}{l_0}-\eta\right)$.

Then, replace x1,y1,x2 by xm,ym,xm+1 to complete the induction for (31) and (32). Moreover, we have

(35) $\|x_{m+1}-x_m\|\le\|x_{m+1}-y_m\|+\|y_m-x_m\|\le t_{m+1}-s_m+s_m-t_m=t_{m+1}-t_m,$

and

(36) $\|x_m-y_0\|\le\|x_m-y_{m-1}\|+\|y_{m-1}-y_0\|\le t_m-s_{m-1}+s_{m-1}-s_0=t_m-s_0<\dfrac{1}{l_0}-\eta.$

It then follows from (35), (36) and Lemma 1 that the sequence $\{x_m\}$ is complete (Cauchy) in $B_1$, so there exists $x^*\in\bar S\left(y_0,\frac{1}{l_0}-\eta\right)$ with $\lim_{m\to\infty}x_m=x^*$.

Furthermore, by the second substep of method (3), we have

$\|F'(x_0)^{-1}F(x_{m+1})\|\le\dfrac{l}{2}\|x_{m+1}-y_m\|^2\le\dfrac{l}{2}(t_{m+1}-s_m)^2,$

so $F(x^*)=0$, by the continuity of $F$ and Lemma 1. The uniqueness part of the proof is given in [11]. ∎

Next, we study the semilocal convergence of method (4) in an analogous way.

Remark 4.

Let p1 be a cubic polynomial defined by

$p_1(t)=l_0t^3+\dfrac{l}{2}t^2+\dfrac{l}{2}t-l.$

We have $p_1(0)=-l<0$ and $p_1(1)=l_0>0$.

It follows by the intermediate value theorem that $p_1$ has at least one root in $(0,1)$.

But $p_1'(t)=3l_0t^2+lt+\dfrac{l}{2}>0$, so $p_1$ is increasing and $p_1(t)=0$ has a unique root in $(0,1)$. Denote this root by $\delta$.
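Since $p_1$ is increasing on $(0,1)$ with a sign change, $\delta$ can be computed by bisection; the following sketch (our own illustration) does so.

```python
def delta_root(l0, l, tol=1e-12):
    """Unique root in (0,1) of p1(t) = l0*t^3 + (l/2)*t^2 + (l/2)*t - l.
    p1(0) = -l < 0, p1(1) = l0 > 0 and p1 is increasing, so bisection applies."""
    p1 = lambda t: l0 * t**3 + (l / 2) * t**2 + (l / 2) * t - l
    a, b = 0.0, 1.0
    while b - a > tol:
        m = (a + b) / 2
        if p1(m) < 0:
            a = m
        else:
            b = m
    return (a + b) / 2

# Sample parameters (ours): for l0 = 1, l = 2, p1(t) = t^3 + t^2 + t - 2.
print(delta_root(1.0, 2.0))   # the root delta lies in (0, 1)
```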

The following estimate is needed:

(37) $0<\dfrac{l\eta}{2(1-l_0\eta)}\le\dfrac{l\eta\left(2+\frac{l}{2}\eta\right)}{2\left(1-l_0\eta\left(1+\frac{l_0}{2}\eta\right)\right)}.$

Evidently, (37) holds, if

$0\le1+\left(\dfrac{l}{2}-l_0\right)\eta+\dfrac{l_0}{2}(l_0+l)\eta^2.$

This is true, if $l\ge2l_0$. Moreover, if $l<2l_0$, then the quadratic $\frac{l_0}{2}(l_0+l)t^2+\left(\frac{l}{2}-l_0\right)t+1$ has negative discriminant and hence no real roots, so it is positive and (37) holds in this case too.

We need an auxiliary result on majorizing sequences for method (4), similar to Lemma 1.

Define the sequence $\{\bar t_n\}$ for each $n=1,2,\ldots$ by $\bar t_0=0$, $\bar t_1=\eta(1+l_0\eta)$,

$\bar s_n=\bar t_n+\dfrac{l(\bar t_n+\bar s_{n-1}-2\bar t_{n-1})(\bar t_n-\bar s_{n-1})}{2(1-l_0\bar t_n)},$

$\bar t_{n+1}=\bar s_n+\dfrac{l(\bar s_n-\bar t_n)^2}{2(1-l_0\bar t_n)}.$
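As with (6), the sequence $\{\bar t_n\}$ can be tabulated directly. The sketch below is our own illustration; it assumes $\bar s_0=\eta$ (the source does not state $\bar s_0$ explicitly) and uses sample parameters satisfying (38), for which the sequence is nondecreasing and bounded.

```python
def traub_majorizing(l0, l, eta, n=10):
    """Tabulate \\bar t_0, \\bar t_1, ... for method (4); assumes \\bar s_0 = eta."""
    t_prev, s_prev = 0.0, eta      # \bar t_0, \bar s_0 (assumed)
    t = eta * (1 + l0 * eta)       # \bar t_1
    seq = [t_prev, t]
    for _ in range(1, n):
        s = t + l * (t + s_prev - 2 * t_prev) * (t - s_prev) / (2 * (1 - l0 * t))
        t_next = s + l * (s - t)**2 / (2 * (1 - l0 * t))
        t_prev, s_prev, t = t, s, t_next
        seq.append(t)
    return seq

# Sample parameters (ours): the iterates increase toward a finite limit.
print(traub_majorizing(1.0, 2.0, 0.1)[-1])
```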
Lemma 5.

Let $l_0>0$, $l>0$ and $\eta>0$ be given parameters. Assume that

(38) $0<\dfrac{l\eta\left(2+\frac{l}{2}\eta\right)}{2\left(1-l_0\eta\left(1+\frac{l_0}{2}\eta\right)\right)}\le\delta<1-l_0\eta.$

Then, the conclusions of Lemma 1 hold with the sequence $\{\bar t_n\}$ defined above replacing the sequence $\{t_n\}$ given by (6).

Proof.

As in Lemma 1, we must show

$0<\dfrac{l(\bar s_m-\bar t_m)}{2(1-l_0\bar t_m)}\le\delta,$

(39) $0<\dfrac{l(\bar t_{m+1}+\bar s_m-2\bar t_m)}{2(1-l_0\bar t_{m+1})}\le\delta$

and

$0\le\bar t_m\le\bar s_m\le\bar t_{m+1}.$


Notice that

$0<\dfrac{l(\bar s_m-\bar t_m)}{2(1-l_0\bar t_m)}\le\dfrac{l(\bar t_{m+1}+\bar s_m-2\bar t_m)}{2(1-l_0\bar t_{m+1})}$

and

$0<\dfrac{1}{1-l_0\bar t_1}\le\dfrac{1}{1-l_0\bar t_m},$

so it suffices to show only (39), or

$\dfrac{l\left(\dfrac{1-\delta^{2m+2}}{1-\delta}+\dfrac{1-\delta^{2m+1}}{1-\delta}-2\,\dfrac{1-\delta^{2m}}{1-\delta}\right)\eta}{2\left(1-l_0(1+\delta+\cdots+\delta^{2m+1})\eta\right)}\le\delta$

or

$f_m(t)=\dfrac{l}{2}t^{2m-1}(t+2)\eta+l_0(1+t+\cdots+t^{2m+1})\eta-1\le0.$

But

$f_{m+1}(t)=f_m(t)+p_1(t)t^{2m-1}(t+1)\eta,$

so

$f_{m+1}(\delta)=f_m(\delta)=\cdots=\lim_{m\to\infty}f_m(\delta)=:f_\infty(\delta)=\dfrac{l_0\eta}{1-\delta}-1\le0,$ by (38).

The rest follows as in Lemma 1, with $\bar t^*=\lim_{n\to\infty}\bar t_n$ and $\bar t^{**}=\dfrac{\eta}{1-\delta}$. ∎

Replace $x_m$, $y_m$, $t^*$, $t^{**}$, $t_1$, $t_n$, $s_n$ by $\bar x_m$, $\bar y_m$, $\bar t^*$, $\bar t^{**}$, $\bar t_1$, $\bar t_n$, $\bar s_n$, and the hypotheses of Lemma 1 and method (3) by those of Lemma 5 and method (4), respectively. Call the resulting hypotheses (A)'. Then, in a way analogous to Theorem 3, we arrive at:

Theorem 6.

Suppose that the conditions (A)’ hold. Then, the conclusions of 3 hold but with method (4) replacing method (3).

Proof.

Notice that the only difference in the proof is that we use

$\|\bar y_m-\bar x_m\|\le\dfrac{l(\bar t_m+\bar s_{m-1}-2\bar t_{m-1})(\bar t_m-\bar s_{m-1})}{2(1-l_0\bar t_m)},$

$\|\bar x_{m+1}-\bar y_m\|=\left\|\left(F'(\bar x_m)^{-1}F'(x_0)\right)\left(F'(x_0)^{-1}F(\bar y_m)\right)\right\|$
$\le\|F'(\bar x_m)^{-1}F'(x_0)\|\,\|F'(x_0)^{-1}F(\bar y_m)\|$
$\le\dfrac{l(\bar s_m-\bar t_m)^2}{2(1-l_0\bar t_m)}$

instead of

$\|x_{m+1}-y_m\|\le\dfrac{l(s_m-t_m)^2}{2(1-l_0s_m)}$

and

$\|y_m-x_m\|\le\dfrac{l(t_m-s_{m-1})^2}{2(1-l_0t_m)},$

respectively. ∎

3. Local convergence analysis

The local convergence analysis for both methods uses the hypotheses (H):

  • (h1)

    $F:\Omega\subseteq B_1\to B_2$ is differentiable in the sense of Fréchet and there exists $x^*\in\Omega$ such that $F'(x^*)^{-1}\in LB(B_2,B_1)$ and $F(x^*)=0$.

  • (h2)

    There exists $L_0>0$ such that for each $x\in\Omega$

    $\|F'(x^*)^{-1}(F'(x)-F'(x^*))\|\le L_0\|x-x^*\|.$

    Set $S_2=\Omega\cap S\left(x^*,\frac{1}{L_0}\right)$.

  • (h3)

    There exists $L>0$ such that for each $x,y\in S_2$

    $\|F'(x^*)^{-1}(F'(y)-F'(x))\|\le L\|y-x\|.$
  • (h4)

    $S(x^*,\rho)\subseteq\Omega$, where

    $\rho=\begin{cases}\mu_A=\dfrac{2}{2L_0+L},&\text{if method (3) is used},\\[2mm] R=\dfrac{4}{4L_0+(1+\sqrt{5})L},&\text{if method (4) is used}.\end{cases}$
  • (h5)

    There exists $\bar r\ge\rho$ such that $L_0\bar r<1$.

    Set $S_3=\Omega\cap\bar S(x^*,\bar r)$.
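The two radii in (h4) are explicit in $L_0$ and $L$; the following sketch (our own illustration) evaluates them.

```python
import math

def radii(L0, L):
    """Local convergence radii from (h4): mu_A for method (3), R for method (4)."""
    mu_A = 2 / (2 * L0 + L)
    R = 4 / (4 * L0 + (1 + math.sqrt(5)) * L)
    return mu_A, R

# For instance, with the constants of Example 13 (L0 = 7.5, L = 15):
print(radii(7.5, 15.0))   # approximately (0.0667, 0.0509)
```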

The proofs of the following two results are omitted, since they follow as the corresponding ones for single step Newton’s method (2) given in [9, 11].

Theorem 7.

Under the hypotheses (H), starting from $x_0\in S(x^*,\mu_A)\setminus\{x^*\}$, the sequence $\{x_n\}$ produced by method (3) converges to $x^*$, which is the only solution of equation $F(x)=0$ in the set $S_3$. Moreover, the following items hold:

(40) $\|y_n-x^*\|\le\dfrac{L\|x_n-x^*\|^2}{2(1-L_0\|x_n-x^*\|)}$

and

(41) $\|x_{n+1}-x^*\|\le\dfrac{L\|y_n-x^*\|^2}{2(1-L_0\|y_n-x^*\|)}.$
Theorem 8.

Under the hypotheses (H), starting from $x_0\in S(x^*,R)\setminus\{x^*\}$, the sequence $\{\bar x_n\}$ produced by method (4) converges to $x^*$, which is the only solution of equation $F(x)=0$ in the set $S_3$. Moreover, the following items hold:

(42) $\|\bar y_n-x^*\|\le\dfrac{L\|\bar x_n-x^*\|^2}{2(1-L_0\|\bar x_n-x^*\|)}$

and

(43) $\|\bar x_{n+1}-x^*\|\le\dfrac{L(2\|\bar x_n-x^*\|+\|\bar y_n-x^*\|)}{2(1-L_0\|\bar x_n-x^*\|)}\,\|\bar y_n-x^*\|.$
Remark 9.

The radius of convergence given in [9, 11] for the single step Newton's method (2) is

(44) $\bar\mu_A=\dfrac{2}{2L_0+L_1},$

where $L_1$ is the Lipschitz constant on $\Omega$. Then, since $S_2\subseteq\Omega$, we get

(45) $L\le L_1,$

so

(46) $\bar\mu_A\le\mu_A.$

The error bounds are tighter too, since $L$ is used in (40) and (41) instead of $L_1$.

4. Numerical Examples

Example 10.

Let $B_1=B_2=\mathbb{R}$, $\Omega=S(x_0,1-\alpha)$, $x_0=1$ and $\alpha\in I=\left[0,\frac12\right)$. Define the function $f$ on $\Omega$ by $f(x)=x^3-\alpha$.

Then, using hypotheses (a1)–(a3), we get $l_0=3-\alpha$, $l=\dfrac{2(6+5\alpha-2\alpha^2)}{3(3-\alpha)}$ and $l_1=2(2-\alpha)$. For $\alpha\in I_0=[0.371269,0.5)$, method (2) converges under our approach, but not according to Kantorovich, since $h_K=l_1\eta>\frac12$ for each $\alpha\in[0,0.5)$; that is, the Kantorovich criterion has no solution in $[0,0.5)$.

Example 11.

Let $B_1=B_2=C[0,1]$, where $C[0,1]$ stands for the space of continuous functions on $[0,1]$, equipped with the max norm. Let $\Omega_0=\{x\in C[0,1]:\|x\|\le d\}$.

Define the operator $G$ on $\Omega_0$ by

$G(x)(s)=x(s)-g(s)-\xi\int_0^1K(s,t)x(t)^3dt,\quad x\in C[0,1],\ s\in[0,1],$

where $g\in C[0,1]$ is a given function, $\xi$ is a real constant and the kernel $K$ is the Green's function. In this case, for each $x\in\Omega_0$, $G'(x)$ is a linear operator defined on $\Omega_0$ by the following expression:

$[G'(x)(v)](s)=v(s)-3\xi\int_0^1K(s,t)x(t)^2v(t)dt,\quad v\in C[0,1],\ s\in[0,1].$

If we choose $x_0(s)=g(s)=1$, it follows that $\|I-G'(x_0)\|\le\frac{3|\xi|}{8}$. Thus, if $|\xi|<\frac83$, $G'(x_0)^{-1}$ is defined and

$\|G'(x_0)^{-1}\|\le\dfrac{8}{8-3|\xi|},\quad\|G(x_0)\|\le\dfrac{|\xi|}{8},\quad\eta=\|G'(x_0)^{-1}G(x_0)\|\le\dfrac{|\xi|}{8-3|\xi|}.$

Choosing $\xi=1.00$ and $d=3$, we have $\eta=0.2$, $T=3.8$, $b=2.6$, $L_1=2.28$ and $l=1.38154$.

Using these values, we obtain that the Kantorovich condition $h_K=l_1\eta\le\frac12$ is not satisfied, since $h_K=0.76>\frac12$, but condition (29) is satisfied, since $h_A=0.485085<\frac12$. The convergence of method (3) then follows from Theorem 3.

Example 12.

Let $B_1=B_2=\mathbb{R}^3$, $\Omega=S(0,1)$, $x^*=(0,0,0)^T$ and define $F$ on $\Omega$ by

(47) $F(x)=F(x_1,x_2,x_3)=\left(e^{x_1}-1,\ \dfrac{e-1}{2}x_2^2+x_2,\ x_3\right)^T.$

For points $u=(u_1,u_2,u_3)^T$, the Fréchet derivative is given by

$F'(u)=\begin{pmatrix}e^{u_1}&0&0\\0&(e-1)u_2+1&0\\0&0&1\end{pmatrix}.$

Then, since $F'(x^*)=\mathrm{diag}(1,1,1)$, we have $L_0=e-1$, $L=e^{\frac{1}{e-1}}$ and $L_1=e$.

Then, we obtain that

$\rho=\begin{cases}\mu_A=0.3827,&\text{if method (3) is used},\\ R=0.3158,&\text{if method (4) is used}.\end{cases}$
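These radii can be checked directly from the constants above (a sketch of our own):

```python
import math

# Constants from Example 12: L0 = e - 1, L = e**(1/(e-1)).
L0 = math.e - 1
L = math.e ** (1 / (math.e - 1))
mu_A = 2 / (2 * L0 + L)                     # radius for method (3)
R = 4 / (4 * L0 + (1 + math.sqrt(5)) * L)   # radius for method (4)
print(round(mu_A, 4), round(R, 4))          # approximately 0.3827 and 0.3158
```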
Example 13.

Let $B_1=B_2=C[0,1]$, the space of continuous functions defined on $[0,1]$ equipped with the max norm, and let $\Omega=\bar S(0,1)$. Define the function $G$ on $\Omega$ by

(48) $G(\varphi)(x)=\varphi(x)-5\int_0^1x\theta\varphi(\theta)^3d\theta.$

We have that

$[G'(\varphi)(\xi)](x)=\xi(x)-15\int_0^1x\theta\varphi(\theta)^2\xi(\theta)d\theta,\quad\text{for each }\xi\in\Omega.$

Then, we get that $x^*=0$, $L_0=7.5$ and $L=15=L_1$. This way, we have that

$\rho=\begin{cases}\mu_A=0.0667,&\text{if method (3) is used},\\ R=0.0509,&\text{if method (4) is used}.\end{cases}$
Acknowledgement.

We thank the referee for the remarks that helped improve this paper.

References