
A simplified homotopy perturbation method
for nonlinear ill-posed operator equations
in Hilbert spaces

Sharad Kumar Dixit
(Received: February 20, 2025; accepted: June 23, 2025; published online: June 30, 2025.)
Abstract.

The homotopy perturbation method is a popular regularization technique for both linear and nonlinear ill-posed problems. In this study we investigate an iteratively regularized, simplified version of the homotopy perturbation method for solving nonlinear ill-posed problems. We carry out a thorough convergence analysis of the method under typical conditions on the nonlinearity and derive the convergence rate under a Hölder-type source condition. Finally, numerical simulations are run to confirm the method's effectiveness.

Key words and phrases:
Nonlinear ill-posed operator equations, Iterative regularization methods, Nonlinear operators on Hilbert spaces, Convergence analysis, Homotopy perturbation method
2010 Mathematics Subject Classification:
65J15, 65J22, 65F22, 47J06
Department of Applied Science and Humanities, Government Engineering College Kaimur, Bihar Engineering University, DSTTE Patna, Bihar, India, email: sharaddixit71@gmail.com.

1. Introduction

Numerous applications in science and engineering lead to inverse problems. When formulated mathematically, inverse problems are typically ill-posed in the sense of Hadamard. A regularization method is therefore necessary to obtain a stable approximate solution of the inverse problem. Many regularization techniques have been developed in the literature to treat such problems in Hilbert spaces, including the iterative Gauss-Newton method, the Tikhonov method, the Lavrentiev iterative method, and the Levenberg-Marquardt method (see, for example, [2], [3], [6], [10], [12], [16], [17], and [18]). One of the most popular iterative techniques for solving nonlinear ill-posed problems in Hilbert spaces is the Landweber method (cf. [4], [8], [11], [21], and [22]), which is simpler to implement than other regularization techniques. A detailed treatment of this method in the linear case can be found in [5].

To describe this approach, consider the abstract operator equation

(1) F(x)=y,

Here $F: D(F)\subset X \to Y$ is a nonlinear operator with domain $D(F)$, and $X$ and $Y$ are Hilbert spaces with inner products $\langle\cdot,\cdot\rangle$ and norms $\|\cdot\|$, which can always be identified from the context. $F'(x)$ denotes the Fréchet derivative of $F$ at $x$. Let $x^\dagger$ be the exact solution (which need not be unique) of (1). We are mainly interested in problems of the form (1) for which the solution $x^\dagger$ does not depend continuously on the data $y$. In real-world applications exact data may not be available; instead of the actual data $y$, we therefore work with the available perturbed data $y^\delta$ satisfying

(2) $\|y^\delta - y\| \le \delta$,

where δ>0 is the noise level.

Assuming that $F$ is properly scaled and Fréchet differentiable, let

(3) $\|F'(x)\| \le 1, \quad x\in B_\rho(x_0)$,

where $x_0$ is the initial guess for the exact solution and $\rho > 0$. Hanke et al. [8] examined the classical Landweber iteration for the nonlinear case, which reads as follows:

(4) $x_{n+1}^\delta = x_n^\delta - F'(x_n^\delta)^*\big(F(x_n^\delta) - y^\delta\big)$,

where $F'(x)^*$ denotes the adjoint of the Fréchet derivative $F'(x)$ for $x\in B_\rho(x_0)$, and $x_0$ is the initial guess for the exact solution. To analyse the method (4), they considered the following tangential cone nonlinearity condition: with $\eta < \frac12$ and $x,\tilde x\in B_\rho(x_0)\subset D(F)$,

(5) $\|F(x) - F(\tilde x) - F'(\tilde x)(x - \tilde x)\| \le \eta\,\|F(x) - F(\tilde x)\|$.

The method's convergence analysis was conducted under condition (5). The iteration is terminated at the index $n_\delta$ determined by the generalised discrepancy principle

(6) $\|F(x_{n_\delta}^\delta) - y^\delta\| \le \tau\delta < \|F(x_n^\delta) - y^\delta\|, \quad 0\le n < n_\delta$,

where $\tau > 2\,\frac{1+\eta}{1-2\eta} > 2$ is a positive constant depending on $\eta$. Under the following Hölder-type source condition, they determined the rate of convergence:

(7) $x_0 - x^\dagger = \big(F'(x^\dagger)^* F'(x^\dagger)\big)^{\nu} w, \quad w\in X, \ \ 0 < \nu \le \tfrac12$.

To determine the rate of convergence of the method (4), assumption (7) alone is insufficient; they additionally required the following properties of $F$:

(8) $F'(x) = R_{x,\tilde x}\,F'(\tilde x), \quad x,\tilde x\in B_\rho(x_0)$,

where $\{R_{x,\tilde x} : x,\tilde x\in B_\rho(x_0)\}$ is a family of bounded linear operators $R_{x,\tilde x}: Y\to Y$ such that

(9) $\|R_{x,\tilde x} - I\| \le C\,\|x - \tilde x\|$,

where $C$ is a positive constant.
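To make the classical scheme concrete, the Landweber iteration (4) together with the discrepancy principle (6) can be sketched in a few lines. The two-dimensional operator below is a hypothetical toy example (it does not appear in the cited literature), chosen only so that $\|F'(x)\|\le 1$ holds near the solution:

```python
import numpy as np

def F(x):
    # hypothetical, mildly nonlinear toy operator, scaled so that ||F'(x)|| <= 1
    return 0.5 * (x + 0.05 * np.array([x[1]**2, x[0]**2]))

def F_prime(x):
    # Jacobian (Frechet derivative) of F at x
    return 0.5 * (np.eye(2) + 0.1 * np.array([[0.0, x[1]], [x[0], 0.0]]))

x_exact = np.array([1.0, 2.0])
delta = 1e-3                                          # noise level
y_delta = F(x_exact) + delta * np.array([1.0, 0.0])   # ||y_delta - y|| = delta
tau = 2.5                                             # tau > 2(1+eta)/(1-2 eta) for small eta

x = np.array([0.8, 1.8])                              # initial guess x0
n_delta = None
for n in range(10000):
    residual = F(x) - y_delta
    if np.linalg.norm(residual) <= tau * delta:       # discrepancy principle (6)
        n_delta = n
        break
    x = x - F_prime(x).T @ residual                   # Landweber step (4)
```

With noisy data the loop stops at a finite index $n_\delta$; iterating further would eventually amplify the data error instead of reducing the approximation error.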

The homotopy perturbation iteration for nonlinear ill-posed problems in Hilbert spaces was first developed by Li Cao, Bo Han, and Wei Wang (cf. [14], [15]). Its main idea is to combine the homotopy methodology with the standard perturbation method and to incorporate an embedding homotopy parameter. With the notation $T_n = F'(x_n^\delta)$, the $N$-order homotopy perturbation iteration reads

(10) $x_{n+1}^\delta = x_n^\delta - \sum_{j=1}^{N}\big(I - T_n^* T_n\big)^{j-1} T_n^*\big(F(x_n^\delta) - y^\delta\big)$.

Notably, (10) can alternatively be understood as $N$ steps of the classical Landweber iteration applied to the linearized problem [9] $F(x_n^\delta) + T_n(x - x_n^\delta) = y^\delta$. The classical Landweber iteration (4) is obtained by the one-order approximation truncation ($N = 1$). The homotopy perturbation iteration of [14] is produced by the two-order approximation truncation ($N = 2$):

(11) $x_{n+1}^\delta = x_n^\delta - \big(2I - T_n^* T_n\big) T_n^*\big(F(x_n^\delta) - y^\delta\big)$.
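In the linear case $F(x) = Tx$ the truncations above have a closed error propagation: since $\sum_{j=1}^{N}(I - T^*T)^{j-1}T^*T = I - (I - T^*T)^N$, one $N$-order step multiplies the error by $(I - T^*T)^N$, i.e. it performs $N$ Landweber steps per residual evaluation. A minimal sketch on a hypothetical random linear system, scaled so that $\|T\| < 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((5, 5))
T = 0.9 * T / np.linalg.norm(T, 2)       # scale so that ||T|| < 1
x_exact = rng.standard_normal(5)
y = T @ x_exact                           # exact data (delta = 0)

def hpi_step(x, N):
    # one step of the N-order homotopy perturbation iteration (10) for F(x) = T x
    residual = T @ x - y
    step = np.zeros_like(x)
    M = np.eye(len(x))                    # M = (I - T^T T)^{j-1}
    for _ in range(N):
        step += M @ (T.T @ residual)
        M = M @ (np.eye(len(x)) - T.T @ T)
    return x - step

x0 = np.zeros(5)
x1 = x0.copy()                            # N = 1: classical Landweber iteration (4)
x2 = x0.copy()                            # N = 2: homotopy perturbation iteration (11)
for _ in range(50):
    x1 = hpi_step(x1, 1)
    x2 = hpi_step(x2, 2)
```

Here 50 steps with $N = 2$ reproduce 100 Landweber steps, which is the "half the time" effect reported for (11).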

It was demonstrated that, in comparison to (4), only half the computational time is required for (11). It was subsequently applied successfully to the inversion of well-log constrained seismic waveforms [7]. The convergence analysis in [15] was conducted with respect to the stopping rule (6) and the nonlinearity condition (5). In addition, the convergence rate was derived under assumptions (7), (8) and (9).

It should be emphasised that each iteration step of the methods (4) and (11) requires the computation of a Fréchet derivative. For both methods, the rate of convergence has been determined under the source condition (7), which depends on the unknown solution $x^\dagger$. Numerous authors have explored simplified variants of such iterative methods to address this issue (cf. [11], [16], [17], [18], [19] and [20]). Since a simplified method computes the Fréchet derivative only at the initial guess $x_0$ for the exact solution $x^\dagger$, its computational cost decreases dramatically in comparison with the original method.

Driven by this benefit, we present a simplified version of the Homotopy perturbation iteration (11), denoted as

(12) $x_{n+1}^\delta = x_n^\delta - \big(2I - A_0^* A_0\big) A_0^*\big(F(x_n^\delta) - y^\delta\big)$,

where $A_0 = F'(x_0)$ and $A_0^*$ denotes the adjoint of the operator $A_0$. While the method (11) involves the derivative at each iterate $x_n^\delta$, our method uses only the Fréchet derivative at the initial guess $x_0$. Thus, our approach weakens the assumptions made in [14] and [15] while simultaneously reducing the computational cost. In this paper, we examine the convergence of the method (12) under a Morozov-type discrepancy principle and appropriate assumptions on the nonlinear operator $F$. The method's rate of convergence under a Hölder-type source condition is also examined. Finally, we provide a numerical example to validate our approach.
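To make the structural difference concrete, here is a minimal sketch of the proposed iteration (12) on a hypothetical two-dimensional toy operator (chosen only so that the derivative norms stay below 1 near the solution): the derivative $A_0 = F'(x_0)$ is assembled once, outside the loop, and only $F$ itself is evaluated per step.

```python
import numpy as np

def F(x):
    # hypothetical toy operator, scaled so that ||F'(x)|| <= 1 near the solution
    return 0.5 * (x + 0.05 * np.array([x[1]**2, x[0]**2]))

def F_prime(x):
    return 0.5 * (np.eye(2) + 0.1 * np.array([[0.0, x[1]], [x[0], 0.0]]))

x_exact = np.array([1.0, 2.0])
delta = 1e-3
y_delta = F(x_exact) + delta * np.array([1.0, 0.0])   # ||y_delta - y|| = delta
tau = 2.5

x0 = np.array([0.8, 1.8])
A0 = F_prime(x0)                           # Frechet derivative computed ONCE
M = (2 * np.eye(2) - A0.T @ A0) @ A0.T     # fixed operator (2I - A0* A0) A0*

x = x0.copy()
n_delta = None
for n in range(10000):
    residual = F(x) - y_delta
    if np.linalg.norm(residual) <= tau * delta:   # Morozov-type stopping rule
        n_delta = n
        break
    x = x - M @ residual                          # simplified HPI step (12)
```

Per step this needs one evaluation of $F$ and two fixed matrix-vector products, whereas (11) would reassemble $T_n = F'(x_n^\delta)$ at every iterate.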

2. Convergence Analysis of the Method

To prove the method’s convergence, we utilise the following assumptions on the nonlinear operator F.

Assumption 1.

(i) There exists $\rho > 0$ such that $B_\rho(x_0)\subset D(F)$ and the solution $x^\dagger$ of (1) lies in $B_\rho(x_0)$.

(ii) The nonlinear operator is scaled appropriately, i.e.,

(13) $\|F'(x)\| \le \tfrac{1}{\sqrt2}, \quad x\in B_\rho(x_0)$

holds.

(iii) The local property

(14) $\|F(x) - F(\tilde x) - F'(x_0)(x - \tilde x)\| \le \eta\,\|F(x) - F(\tilde x)\|$,

is satisfied by the operator $F$ in a ball $B_\rho(x_0)$, where $\eta < 1$ and $x,\tilde x\in B_\rho(x_0)$.

Assumptions similar to Assumption 1 (iii) are utilised in a number of publications on the convergence analysis of ill-posed equations (cf. [8], [12], [16], [17], [19], [20], [22]). Assumption 1 (iii) can be understood as a tangential cone condition on the operator $F$.

From (14), the triangle inequality immediately yields, for every $x,\tilde x\in B_\rho(x_0)$,

(15) $\frac{1}{1+\eta}\,\|F'(x_0)(x - \tilde x)\| \le \|F(x) - F(\tilde x)\| \le \frac{1}{1-\eta}\,\|F'(x_0)(x - \tilde x)\|$.
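For completeness, the two bounds in (15) follow from (14) and the triangle inequality:
\[
\|F'(x_0)(x-\tilde x)\| \le \|F(x)-F(\tilde x)\| + \|F(x)-F(\tilde x)-F'(x_0)(x-\tilde x)\| \le (1+\eta)\,\|F(x)-F(\tilde x)\|,
\]
\[
\|F(x)-F(\tilde x)\| \le \|F'(x_0)(x-\tilde x)\| + \eta\,\|F(x)-F(\tilde x)\| \ \Longrightarrow\ (1-\eta)\,\|F(x)-F(\tilde x)\| \le \|F'(x_0)(x-\tilde x)\|.
\]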

From (13) we obtain the inequality

(16) $\|I - F'(x_0)^* F'(x_0)\| \le 1$.

When dealing with noisy data, the iterates $x_n^\delta$ cannot converge but still yield a stable approximation of $x^\dagger$, provided the iteration is terminated after $n_\delta$ steps by the Morozov-type stopping criterion

(17) $\|F(x_{n_\delta}^\delta) - y^\delta\| \le \tau\delta < \|F(x_n^\delta) - y^\delta\|, \quad 0\le n < n_\delta$,

where $\tau > 2$ is a positive constant depending on $\eta$.

The monotonicity of the iteration error is shown by the following theorem.

Theorem 1.

Assume that $x^\dagger$ is a solution of (1) in $B_\rho(x_0)$ and that, for perturbed data $y^\delta$ satisfying (2), the iteration is terminated after $n_\delta$ steps according to the stopping rule (17) with $\tau > \frac{8(\eta+1)}{3-8\eta}$. If (13), (14) and (16) hold with $0 < \eta < \frac38$, then

(18) $\|x^\dagger - x_{n+1}^\delta\| \le \|x^\dagger - x_n^\delta\|, \quad 0\le n < n_\delta$,

and if $\delta = 0$,

(19) $\sum_{n=0}^{\infty}\|F(x_n) - y\|^2 < \infty$.
Proof.

Let $A_0 = F'(x_0)$, $s_n^\delta = F(x_n^\delta) - y^\delta$, and $0\le n < n_\delta$. By induction, using (12), (13), (14) and (16), one infers that $x_n^\delta\in B_\rho(x_0)$. Consider

\begin{align*}
\|x^\dagger - x_{n+1}^\delta\|^2 - \|x^\dagger - x_n^\delta\|^2
&= \|x_{n+1}^\delta - x_n^\delta\|^2 + 2\big\langle x^\dagger - x_n^\delta,\, x_n^\delta - x_{n+1}^\delta\big\rangle\\
&= \big\|(A_0^*A_0 - 2I)A_0^* s_n^\delta\big\|^2 + 2\big\langle x^\dagger - x_n^\delta,\, (2I - A_0^*A_0)A_0^* s_n^\delta\big\rangle\\
&= \big\|(A_0^*A_0 - 2I)A_0^* s_n^\delta\big\|^2 + 2\big\langle F(x_n^\delta) - F(x^\dagger) - A_0(x_n^\delta - x^\dagger) + y - y^\delta,\, s_n^\delta\big\rangle\\
&\quad + 2\big\langle F(x_n^\delta) - F(x^\dagger) - A_0(x_n^\delta - x^\dagger) + y - y^\delta,\, (I - A_0A_0^*)s_n^\delta\big\rangle\\
&\quad - 2\big\langle s_n^\delta, s_n^\delta\big\rangle + 2\big\langle s_n^\delta, (A_0A_0^* - I)s_n^\delta\big\rangle\\
&\le 4\delta(1+\eta)\,\|s_n^\delta\| + \big(4\eta - \tfrac32\big)\|s_n^\delta\|^2.
\end{align*}

By the stated conditions $0 < \eta < \frac38$, $\tau > \frac{8(\eta+1)}{3-8\eta}$, and $n < n_\delta$, the stopping rule (17) implies that the right-hand side is negative. This confirms (18) and demonstrates the monotone decrease of the iteration error. In fact, we have shown that the inequality

$\|x^\dagger - x_{n+1}\|^2 + \big(\tfrac32 - 4\eta\big)\|F(x_n) - y\|^2 \le \|x^\dagger - x_n\|^2$

holds for every $n\in\mathbb{N}_0$ if $\delta = 0$.
Consequently, by induction we obtain

$\sum_{n=0}^{\infty}\|F(x_n) - y\|^2 \le \frac{2}{3-8\eta}\,\|x^\dagger - x_0\|^2$.

The proof is now complete. ∎

Remark 2.

If $\delta > 0$, we can show that

(20) $n_\delta(\tau\delta)^2 \le \sum_{n=0}^{n_\delta-1}\|F(x_n^\delta) - y^\delta\|^2 \le \frac{2\tau}{(3-8\eta)\tau - 8(1+\eta)}\,\|x^\dagger - x_0\|^2$.

Thus, the discrepancy principle (17) with $\tau > \frac{8(\eta+1)}{3-8\eta}$ determines a well-defined, finite stopping index $n_\delta < \infty$.

Theorem 3.

If (1) is solvable in $B_\rho(x_0)$ and Assumption 1 holds, then $x_n$ converges to a solution $x_*\in B_\rho(x_0)$.

Proof.

Let $x^\dagger$ be any solution of (1) in $B_\rho(x_0)$, and put

(21) $r_n = x^\dagger - x_n$.

By Theorem 1, $\|r_n\|$ decreases monotonically to some $\varepsilon\ge 0$. We now show that $r_n$ is a Cauchy sequence. For $i\ge n$, we select $m$ with $n\le m\le i$ such that

(22) $\|F(x_m) - y\| \le \|F(x_j) - y\|, \quad n\le j\le i$.

Firstly, we have

(23) $\|r_i - r_n\| \le \|r_i - r_m\| + \|r_m - r_n\|$

and

$\|r_i - r_m\|^2 = 2\langle r_m - r_i, r_m\rangle + \|r_i\|^2 - \|r_m\|^2$,
(24) $\|r_m - r_n\|^2 = 2\langle r_m - r_n, r_m\rangle + \|r_n\|^2 - \|r_m\|^2$.

The last two terms on the right-hand sides of (24) converge to zero as $n\to\infty$. We now use (12) and (15) to show that $\langle r_m - r_n, r_m\rangle$ also tends to zero as $n\to\infty$:

\begin{align*}
|\langle r_m - r_n, r_m\rangle|
&= \Big|\Big\langle \sum_{j=n}^{m-1}(2I - A_0^*A_0)A_0^*(F(x_j) - y),\, r_m\Big\rangle\Big|\\
&\le \sum_{j=n}^{m-1}\big|\big\langle (2I - A_0A_0^*)(F(x_j) - y),\, A_0(x^\dagger - x_m)\big\rangle\big|\\
&\le \sum_{j=n}^{m-1}\big\|(2I - A_0A_0^*)(F(x_j) - y)\big\|\,\big\|A_0(x^\dagger - x_j + x_j - x_m)\big\|\\
&\le \sum_{j=n}^{m-1}\big\|(2I - A_0A_0^*)(F(x_j) - y)\big\|\,\big(\|A_0(x^\dagger - x_j)\| + \|A_0(x_j - x_m)\|\big)\\
&\le (1+\eta)\sum_{j=n}^{m-1}\big\|(2I - A_0A_0^*)(F(x_j) - y)\big\|\,\big(\|y - F(x_j)\| + \|F(x_m) - F(x_j)\|\big)\\
&\le (1+\eta)\sum_{j=n}^{m-1}\big\|(2I - A_0A_0^*)(F(x_j) - y)\big\|\,\big(2\|y - F(x_j)\| + \|F(x_m) - y\|\big)\\
&\le 3(1+\eta)\sum_{j=n}^{m-1}\big\|(2I - A_0A_0^*)(F(x_j) - y)\big\|\,\|F(x_j) - y\|\\
&\le 6(1+\eta)\sum_{j=n}^{m-1}\|F(x_j) - y\|^2.
\end{align*}

Likewise, it may be demonstrated that

$|\langle r_i - r_m, r_m\rangle| \le 6(1+\eta)\sum_{j=m}^{i-1}\|F(x_j) - y\|^2$.

From (19) it follows that the right-hand sides of these estimates tend to zero as $n\to\infty$, and from (23) we deduce that $r_n$, and hence $x_n$, is a Cauchy sequence. We denote by $x_*$ the limit of $x_n$ and observe that, as $n\to\infty$, the residuals $\|F(x_n) - y\|$ converge to zero, so that $x_*$ is a solution of (1). ∎

Our next result shows that this stopping rule turns the iteration (12) into a regularization method.

Theorem 4.

Let the assumptions of Theorem 3 hold. If $y^\delta$ satisfies (2) and the iteration (12) is stopped according to the stopping rule (17), then the iterates $x_{n_\delta}^\delta$ converge to a solution of (1) as $\delta\to 0$.

The proof resembles that of [8, Theorem 2.4].

3. Convergence rates

In this section we determine the rate of convergence of the proposed iteration. The rate of convergence of the iteration (11) under the source condition (7) was obtained in [15]. We note that, from a practical standpoint, such assumptions are quite difficult to verify [23]. We use the following source condition to obtain the convergence rate of the method:

(25) $x^\dagger - x_0 = \big(F'(x_0)^* F'(x_0)\big)^{\nu} w, \quad \nu > 0,\ w\in X$,

where $\|w\|$ is small enough.

The following estimates will be applied in this section to obtain the convergence rate result for the method.

Lemma 5 (cf. [12]).

Let $A: X\to Y$ be a bounded linear operator with $\|A\| \le 1$ and let $\nu\in[0,1]$. Then

(i) $\big\|(I - A^*A)^n (A^*A)^{\nu}\big\| \le (n+1)^{-\nu}$,

(ii) $\big\|(I - A^*A)^n A^*\big\| \le (n+1)^{-\frac12}$,

(iii) $\Big\|\sum_{i=0}^{n-1}(I - A^*A)^i (A^*A)^{\nu}\Big\| \le n^{1-\nu}$.

Lemma 6 (cf. [12]).

Assume $p$ and $q$ are positive. Then there is a positive constant $c(p,q)$, independent of $n$, such that

$\sum_{i=0}^{n-1}(i+1)^{-p}(n-i)^{-q} \le c(p,q)\,(n+1)^{1-p-q}\begin{cases}1, & \max\{p,q\} < 1,\\ \log(n+1), & \max\{p,q\} = 1,\\ (n+1)^{\max\{p,q\}-1}, & \max\{p,q\} > 1.\end{cases}$
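Lemma 6 can be checked numerically. The sketch below evaluates the left-hand sum for two hypothetical parameter pairs: one with $\max\{p,q\} < 1$, where $(n+1)^{1-p-q}$ is the exact order (for $p = q = \frac12$ the sum approaches $\int_0^1 \frac{ds}{\sqrt{s(1-s)}} = \pi$), and one with $\max\{p,q\} > 1$, where the extra factor $(n+1)^{\max\{p,q\}-1}$ appears.

```python
import numpy as np

def conv_sum(n, p, q):
    # left-hand side of Lemma 6: sum_{i=0}^{n-1} (i+1)^{-p} (n-i)^{-q}
    i = np.arange(n)
    return np.sum((i + 1.0) ** (-p) * (n - i) ** (-q))

n = 2000

# p = q = 1/2: bound is c(p,q)*(n+1)^0, and the sum stays near pi
s_half = conv_sum(n, 0.5, 0.5)

# p = 2, q = 1/2: bound is c(p,q)*(n+1)^{1-p-q}*(n+1)^{p-1} = c(p,q)*(n+1)^{-1/2};
# the sum is dominated by the terms with small i
s_heavy = conv_sum(n, 2.0, 0.5)
```

This kind of check is useful because the lemma drives all the kernel-sum estimates in the rate proof below.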
Theorem 7.

Assume that $F$ satisfies (13) and (14), that $y^\delta$ satisfies (2), and that problem (1) has a solution in $B_\rho(x_0)$. If $x^\dagger - x_0$ satisfies (25) with $0 < \nu \le 1$ and $\|w\|$ is small enough, then there exists a positive constant $\mathfrak{C}$, depending only on $\nu$, such that

(26) $\|x^\dagger - x_n^\delta\| \le \mathfrak{C}\,\|w\|(n+1)^{-\nu}$,
(27) $\|A_0 e_n^\delta\| \le \mathfrak{C}\,\|w\|(n+1)^{-\nu-\frac12}$,

for $0\le n < n_\delta$. As before, $n_\delta$ is the stopping index of the discrepancy principle (17) with $\tau > \frac{8(\eta+1)}{3-8\eta}$. In the case of exact data ($\delta = 0$), (26) and (27) hold for all $n\ge 0$.

Proof.

By Theorem 1 the iteration (12) is well defined, since the iterates $x_n^\delta$, $0\le n\le n_\delta$, always remain in $B_\rho(x_0)\subset D(F)$. Furthermore, when $\delta > 0$, the stopping index $n_\delta$ is finite. Set $e_n^\delta := x^\dagger - x_n^\delta$. For $0\le n < n_\delta$, we obtain from (12) the representation

\begin{align*}
e_{n+1}^\delta &= e_n^\delta - (x_{n+1}^\delta - x_n^\delta)\\
&= (I - 2A_0^*A_0)e_n^\delta - 2A_0^*\big(y - F(x_n^\delta) - A_0(x^\dagger - x_n^\delta)\big)\\
&\quad + 2A_0^*(y - y^\delta) + A_0^*A_0A_0^*\big(y^\delta - F(x_n^\delta)\big)\\
&= (I - 2A_0^*A_0)e_n^\delta + 2A_0^* z_n^\delta + 2A_0^*(y - y^\delta) + A_0^*A_0A_0^* r_n^\delta,
\end{align*}

where $z_n^\delta = -\big(y - F(x_n^\delta) - A_0(x^\dagger - x_n^\delta)\big)$ and $r_n^\delta = y^\delta - F(x_n^\delta)$.

This yields the closed expression for the error, for $0\le n < n_\delta$,

\begin{align*}
e_n^\delta &= (I - 2A_0^*A_0)^n e_0 + \sum_{j=0}^{n-1}(I - 2A_0^*A_0)^j\,2A_0^* z_{n-j-1}^\delta\\
&\quad + \sum_{j=0}^{n-1}(I - 2A_0^*A_0)^j\,2A_0^*(y - y^\delta)\\
&\quad + \sum_{j=0}^{n-1}(I - 2A_0^*A_0)^j (A_0^*A_0)A_0^* r_{n-j-1}^\delta, \tag{28}
\end{align*}

and consequently

\begin{align*}
A_0 e_n^\delta &= (I - 2A_0A_0^*)^n A_0 e_0 + \sum_{j=0}^{n-1}(I - 2A_0A_0^*)^j\,2A_0A_0^* z_{n-j-1}^\delta\\
&\quad + \sum_{j=0}^{n-1}(I - 2A_0A_0^*)^j\,2A_0A_0^*(y - y^\delta)\\
&\quad + \sum_{j=0}^{n-1}(I - 2A_0A_0^*)^j A_0(A_0^*A_0)A_0^* r_{n-j-1}^\delta.
\end{align*}

We use induction to prove the result for $0\le n < n_\delta$. The case $n = 0$ is immediate, and we assume that the assertion holds for all $j$ with $0\le j < n$, where $n < n_\delta$.

For $n < n_\delta$,

(29) $\|e_n^\delta\| \le \big\|(I - 2A_0^*A_0)^n(A_0^*A_0)^{\nu} w\big\| + \sum_{j=0}^{n-1}\big\|(I - 2A_0^*A_0)^j\,2A_0^* z_{n-j-1}^\delta\big\|$
(30) $\qquad + \Big\|\sum_{j=0}^{n-1}(I - 2A_0^*A_0)^j\,2A_0^*(y - y^\delta)\Big\|$
(31) $\qquad + \sum_{j=0}^{n-1}\big\|(I - 2A_0^*A_0)^j(A_0^*A_0)A_0^* r_{n-j-1}^\delta\big\|$
(32) $\qquad \le 2^{-\nu}(n+1)^{-\nu}\|w\| + \sum_{j=0}^{n-1}\sqrt2\,(j+1)^{-\frac12}\|z_{n-j-1}^\delta\| + 2\sqrt n\,\delta$
(33) $\qquad + \sum_{j=0}^{n-1}2^{-\frac32}(j+1)^{-\frac32}\|r_{n-j-1}^\delta\|,$

and

$\|A_0e_n^\delta\| \le \big\|(I - 2A_0A_0^*)^n A_0e_0\big\| + \sum_{j=0}^{n-1}\big\|(I - 2A_0A_0^*)^j\,2A_0A_0^* z_{n-j-1}^\delta\big\|$
$\qquad + \Big\|\sum_{j=0}^{n-1}(I - 2A_0A_0^*)^j\,2A_0A_0^*(y - y^\delta)\Big\|$
$\qquad + \sum_{j=0}^{n-1}\big\|(I - 2A_0A_0^*)^j A_0(A_0^*A_0)A_0^* r_{n-j-1}^\delta\big\|$
(34) $\qquad \le 2^{-\nu-\frac12}(n+1)^{-\nu-\frac12}\|w\| + \sum_{j=0}^{n-1}(j+1)^{-1}\|z_{n-j-1}^\delta\| + \delta + 2^{-2}\sum_{j=0}^{n-1}(j+1)^{-2}\|r_{n-j-1}^\delta\|.$

Using the triangle inequality, (15), (17) and the induction hypothesis, we now obtain

(35) $\|y^\delta - F(x_n^\delta)\| \le 2\|y - F(x_n^\delta)\| \le \frac{2}{1-\eta}\,\|A_0(x^\dagger - x_n^\delta)\| \le \frac{2}{1-\eta}\,\|w\|(n+1)^{-\nu-\frac12}$,

and

(36) $\|z_n^\delta\| \le \eta\,\|y - F(x_n^\delta)\| \le \frac{\eta}{1-\eta}\,\|A_0e_n^\delta\| \le \frac{\eta}{1-\eta}\,\|w\|(n+1)^{-\nu-\frac12}$.

Consequently,

$\sum_{j=0}^{n-1}\sqrt2\,(j+1)^{-\frac12}\|z_{n-j-1}^\delta\| \le \frac{\sqrt2\,\eta}{1-\eta}\,\|w\|\sum_{j=0}^{n-1}(j+1)^{-\frac12}(n-j)^{-\nu-\frac12}$

and

$\sum_{j=0}^{n-1}2^{-\frac32}(j+1)^{-\frac32}\|r_{n-j-1}^\delta\| \le \frac{2^{-\frac12}}{1-\eta}\,\|w\|\sum_{j=0}^{n-1}(j+1)^{-\frac32}(n-j)^{-\nu-\frac12}$.

Thus, applying Lemma 6, we obtain

(37) $\sum_{j=0}^{n-1}\sqrt2\,(j+1)^{-\frac12}\|z_{n-j-1}^\delta\| \le a_\nu\,\|w\|(n+1)^{-\nu}$

and

(38) $\sum_{j=0}^{n-1}2^{-\frac32}(j+1)^{-\frac32}\|r_{n-j-1}^\delta\| \le b_\nu\,\|w\|(n+1)^{-\nu}$,

where the constants $a_\nu > 0$ and $b_\nu > 0$ depend on $\nu$. Therefore

\begin{align*}
\|e_n^\delta\| &\le 2^{-\nu}(n+1)^{-\nu}\|w\| + a_\nu\,\|w\|(n+1)^{-\nu} + 2\sqrt n\,\delta + b_\nu\,\|w\|(n+1)^{-\nu}\\
&\le \big(2^{-\nu} + a_\nu + b_\nu\big)\|w\|(n+1)^{-\nu} + 2\sqrt n\,\delta.
\end{align*}

Likewise, one may show that

\begin{align*}
\|A_0e_n^\delta\| &\le 2^{-\nu-\frac12}(n+1)^{-\nu-\frac12}\|w\| + \tilde a_\nu\,\|w\|(n+1)^{-\nu-\frac12} + \delta + \tilde b_\nu\,\|w\|(n+1)^{-\nu-\frac12}\\
&\le \big(2^{-\nu-\frac12} + \tilde a_\nu + \tilde b_\nu\big)\|w\|(n+1)^{-\nu-\frac12} + \delta. \tag{39}
\end{align*}

Here, $\tilde a_\nu > 0$ and $\tilde b_\nu > 0$ depend on $\nu$.

Owing to (17) and $\tau > \frac{8(\eta+1)}{3-8\eta}$, we have, for $n < n_\delta$,

$\frac{8(\eta+1)}{3-8\eta}\,\delta \le \tau\delta \le \|F(x_n^\delta) - y^\delta\| \le \frac{1}{1-\eta}\,\|A_0e_n^\delta\| + \delta$.

Therefore, using (39), we obtain

$\frac{8(\eta+1)}{3-8\eta}\,\delta \le \frac{1}{1-\eta}\big(2^{-\nu-\frac12} + \tilde a_\nu + \tilde b_\nu\big)\|w\|(n+1)^{-\nu-\frac12} + \frac{2-\eta}{1-\eta}\,\delta$.

This yields

(40) $\delta \le \frac{3-8\eta}{-16\eta^2 + 19\eta + 2}\big(2^{-\nu-\frac12} + \tilde a_\nu + \tilde b_\nu\big)\|w\|(n+1)^{-\nu-\frac12}$.

Therefore,

(41) $\|e_n^\delta\| \le \mathfrak{C}\,\|w\|(n+1)^{-\nu}$,

and

(42) $\|A_0e_n^\delta\| \le \mathfrak{C}\,\|w\|(n+1)^{-\nu-\frac12}$,

where

\begin{align*}
\mathfrak{C} = \max\Big(&\Big(1 + \frac{3-8\eta}{-16\eta^2 + 19\eta + 2}\Big)\big(2^{-\nu-\frac12} + \tilde a_\nu + \tilde b_\nu\big),\\
&2^{-\nu} + a_\nu + b_\nu + 2\,\frac{3-8\eta}{-16\eta^2 + 19\eta + 2}\big(2^{-\nu-\frac12} + \tilde a_\nu + \tilde b_\nu\big)\Big).
\end{align*}

Theorem 8.

Under the assumptions of Theorem 7, we obtain

(43) $n_\delta \le \mathfrak{C}_1\Big(\frac{\|w\|}{\delta}\Big)^{\frac{2}{2\nu+1}}$,

and

(44) $\|x^\dagger - x_{n_\delta}^\delta\| \le \mathfrak{C}_2\,\|w\|^{\frac{1}{2\nu+1}}\,\delta^{\frac{2\nu}{2\nu+1}}$,

where $\mathfrak{C}_1 > 0$ and $\mathfrak{C}_2 > 0$ are positive constants depending only on $\nu$.

Proof.

From (28) we can write

\begin{align*}
e_{n_\delta}^\delta &= (I - 2A_0^*A_0)^{n_\delta}e_0 + \sum_{j=0}^{n_\delta-1}(I - 2A_0^*A_0)^j\,2A_0^* z_{n_\delta-j-1}^\delta + \sum_{j=0}^{n_\delta-1}(I - 2A_0^*A_0)^j\,2A_0^*(y - y^\delta)\\
&\quad + \sum_{j=0}^{n_\delta-1}(I - 2A_0^*A_0)^j(A_0^*A_0)A_0^* r_{n_\delta-j-1}^\delta\\
&= (A_0^*A_0)^{\nu} W_{n_\delta} + \sum_{j=0}^{n_\delta-1}(I - 2A_0^*A_0)^j\,2A_0^*(y - y^\delta),
\end{align*}

where $W_{n_\delta} = (I - 2A_0^*A_0)^{n_\delta}w + \sum_{j=0}^{n_\delta-1}2^{\nu+\frac12}(I - 2A_0^*A_0)^j(2A_0^*A_0)^{\frac12-\nu}\tilde z_{n_\delta-j-1}^\delta + \sum_{j=0}^{n_\delta-1}2^{\nu-\frac32}(I - 2A_0^*A_0)^j(2A_0^*A_0)^{\frac32-\nu}\tilde r_{n_\delta-j-1}^\delta$, with $\|\tilde z_j^\delta\| = \|z_j^\delta\|$ and $\|\tilde r_j^\delta\| = \|r_j^\delta\|$, $j = 0,1,\dots,n_\delta-1$.

\begin{align*}
\|W_{n_\delta}\| &\le \big\|(I - 2A_0^*A_0)^{n_\delta}w\big\| + \sum_{j=0}^{n_\delta-1}2^{\nu+\frac12}\big\|(I - 2A_0^*A_0)^j(2A_0^*A_0)^{\frac12-\nu}\big\|\,\|z_{n_\delta-j-1}^\delta\|\\
&\quad + \sum_{j=0}^{n_\delta-1}2^{\nu-\frac32}\big\|(I - 2A_0^*A_0)^j(2A_0^*A_0)^{\frac32-\nu}\big\|\,\|r_{n_\delta-j-1}^\delta\|\\
&\le (n_\delta+1)^0\|w\| + \sum_{j=0}^{n_\delta-1}2^{\nu+\frac12}(j+1)^{\nu-\frac12}\|z_{n_\delta-j-1}^\delta\| + \sum_{j=0}^{n_\delta-1}2^{\nu-\frac32}(j+1)^{\nu-\frac32}\|r_{n_\delta-j-1}^\delta\|.
\end{align*}

We have

$\sum_{j=0}^{n_\delta-1}2^{\nu+\frac12}(j+1)^{\nu-\frac12}\|z_{n_\delta-j-1}^\delta\| \le 2^{\nu+\frac12}\frac{\eta}{1-\eta}\,\|w\|\sum_{j=0}^{n_\delta-1}(j+1)^{\nu-\frac12}(n_\delta-j)^{-\nu-\frac12}$

and

$\sum_{j=0}^{n_\delta-1}2^{\nu-\frac32}(j+1)^{\nu-\frac32}\|r_{n_\delta-j-1}^\delta\| \le \frac{2^{\nu-\frac12}}{1-\eta}\,\|w\|\sum_{j=0}^{n_\delta-1}(j+1)^{\nu-\frac32}(n_\delta-j)^{-\nu-\frac12}$.

By applying Lemma 6,

$\sum_{j=0}^{n_\delta-1}2^{\nu+\frac12}(j+1)^{\nu-\frac12}\|z_{n_\delta-j-1}^\delta\| \le c_\nu(n_\delta+1)^0\|w\|$

and

$\sum_{j=0}^{n_\delta-1}2^{\nu-\frac32}(j+1)^{\nu-\frac32}\|r_{n_\delta-j-1}^\delta\| \le \tilde c_\nu(n_\delta+1)^{-\nu-\frac12}\|w\|.$

Since $\|w\|$ is assumed to be small, we may take $\|w\| \le 1$. Thus, we have

\begin{align*}
\|W_{n_\delta}\| &\le (n_\delta+1)^0\|w\| + c_\nu(n_\delta+1)^0\|w\| + \tilde c_\nu(n_\delta+1)^{-\nu-\frac12}\|w\|\\
&\le (1 + c_\nu + \tilde c_\nu)\,\|w\|.
\end{align*}

Therefore,

\begin{align*}
\big\|A_0(A_0^*A_0)^{\nu} W_{n_\delta}\big\| &= \big\|A_0e_{n_\delta}^\delta + \big(I - (I - 2A_0A_0^*)^{n_\delta}\big)(y^\delta - y)\big\|\\
&\le \|A_0e_{n_\delta}^\delta\| + \delta\\
&\le \big\|F(x_{n_\delta}^\delta) - F(x^\dagger) - A_0(x_{n_\delta}^\delta - x^\dagger)\big\| + \big\|F(x_{n_\delta}^\delta) - F(x^\dagger)\big\| + \delta\\
&\le (\eta+1)\,\|y - F(x_{n_\delta}^\delta)\| + \delta\\
&\le \big((\eta+1)(\tau+1) + 1\big)\delta.
\end{align*}

Applying the interpolation inequality, we obtain

\begin{align*}
\big\|(A_0^*A_0)^{\nu} W_{n_\delta}\big\| &\le \Big(\big((\eta+1)(\tau+1)+1\big)\delta\Big)^{\frac{2\nu}{2\nu+1}}\Big((1 + c_\nu + \tilde c_\nu)\|w\|\Big)^{\frac{1}{2\nu+1}}\\
&\le \mathfrak{D}\,\delta^{\frac{2\nu}{2\nu+1}}\,\|w\|^{\frac{1}{2\nu+1}},
\end{align*}

where $\mathfrak{D}$ is a positive constant. For $n_\delta = 0$,

(45) $\|e_{n_\delta}^\delta\| \le \mathfrak{D}\,\|w\|^{\frac{1}{2\nu+1}}\,\delta^{\frac{2\nu}{2\nu+1}}$,

and when $n_\delta > 0$, we apply (40) with $n = n_\delta - 1$ to obtain

(46) $\delta \le \Gamma\,\|w\|\,n_\delta^{-\nu-\frac12}$,

where $\Gamma = \frac{3-8\eta}{-16\eta^2 + 19\eta + 2}\big(2^{-\nu-\frac12} + \tilde a_\nu + \tilde b_\nu\big)$, and hence

$n_\delta \le \mathfrak{C}_1\Big(\frac{\|w\|}{\delta}\Big)^{\frac{2}{2\nu+1}}$,

where $\mathfrak{C}_1 = \Gamma^{\frac{2}{2\nu+1}}$.

Using this, we obtain

\begin{align*}
\|e_{n_\delta}^\delta\| &\le \big\|(A_0^*A_0)^{\nu} W_{n_\delta}\big\| + 2\sqrt{n_\delta}\,\delta\\
&\le \mathfrak{D}\,\|w\|^{\frac{1}{2\nu+1}}\,\delta^{\frac{2\nu}{2\nu+1}} + 2\sqrt{\mathfrak{C}_1}\,\|w\|^{\frac{1}{2\nu+1}}\,\delta^{1-\frac{1}{2\nu+1}}\\
&\le \mathfrak{C}_2\,\|w\|^{\frac{1}{2\nu+1}}\,\delta^{\frac{2\nu}{2\nu+1}},
\end{align*}

where $\mathfrak{C}_2 = \mathfrak{D} + 2\sqrt{\mathfrak{C}_1}$. ∎

Remark 9.

In contrast to the suggested simplified HPI (12), the HPI (11) taken into consideration in [15] necessitates the extra condition (8) and (9) on the non-linear operator F in order to show the convergence rate conclusion. The convergence rate result cannot be maintained in practical problems if the operator does not meet condition (8) and (9). Here, we give an example that does not meet assumptions (8) and (9).

4. Numerical Example

This section examines a numerical example to demonstrate the adaptability of the suggested simplified homotopy method. Matlab is used for numerical calculations.

Here we consider the nonlinear model problem of recovering the diffusion coefficient in a parameter estimation problem. Let $f\in L^2(\Omega)$, and let $\Omega\subset\mathbb{R}^d$ ($d = 1,2$) be an open bounded domain with Lipschitz boundary $\Gamma$. We examine the estimation of the diffusion coefficient $c$ in the equation

(47) $\begin{cases} -\big(c(t)\,u_t(t)\big)_t = f(t), & \text{in } \Omega,\\ u = 0, & \text{on } \Gamma. \end{cases}$

We assume that the exact diffusion coefficient $c$ lies in $L^2(\Omega)$. For every $c$ in the domain $D(F) := \{c\in H^1(\Omega): c \ge \bar c > 0\}$, there is a solution $u = u(c)$ in $H^1(\Omega)$. Using the Sobolev embedding $H^1(\Omega)\hookrightarrow L^2(\Omega)$, we may define the nonlinear operator $F: X = L^2(\Omega)\to Y = L^2(\Omega)$ with $F(c) = u(c)$. It was demonstrated in [6], [8] and [21] that this operator is Fréchet differentiable with

(48) $F'(c)h = T_c^{-1}\big[(h\,u_t(c))_t\big], \qquad F'(c)^* w = -B^{-1}\big[u_t(c)\,(T_c^{-1}w)_t\big], \qquad c\in D(F),\ h,w\in L^2(\Omega)$,

where $T_c: H^2(\Omega)\cap H_0^1(\Omega)\to L^2(\Omega)$ is defined by $T_cu := -\big(c(t)u_t(t)\big)_t$ and $B: D(B) := \{\Psi\in H^2(\Omega): \Psi = 0 \text{ on } \Gamma\}\to L^2(\Omega)$ is defined by $B\Psi := -\Psi'' + \Psi$; note that $B^{-1}$ is the adjoint of the embedding operator from $H^1$ into $L^2$.

For every $c$ in $B_\rho(c_0)$, if $\|u_t(c)\| \le b$ for some $b > 0$, then (5) holds locally according to Lemma 2.6 in [24], which guarantees the convergence of the HPI (11). Assumption (14) is a particular instance of (5), which guarantees the convergence of the proposed iteration (12) by Theorem 3 and Theorem 4.

Unfortunately, the convergence rate results do not hold for the HPI (11), since $F$ does not satisfy assumptions (8) and (9) (see [11], [24]). For the convergence rate results of the proposed simplified HPI (12), assumption (8) is not needed.

From the iteration (12), $x_{n+1}^\delta - x_n^\delta \in \mathcal{R}(A_0^*)$ for all $n$. In particular, $x_{n_\delta}^\delta - x_0 \in \mathcal{R}(A_0^*)$ for every $\delta$, which means $\lim_{\delta\to0}\big(x_{n_\delta}^\delta - x_0\big) \in \overline{\mathcal{R}(A_0^*)}$. Hence

(49) $x^\dagger - x_0 \in \overline{\mathcal{R}(A_0^*)}$.

Therefore, assumption (25) is satisfied, owing to both assumption (14) and (49).

Assume that $\Omega = [0,1]$. The right-hand side $f$ of the differential equation (47) is determined explicitly from the exact data $u(t)$. By adding random noise to the exact data $u(t)$ at a specified noise level $\delta$, perturbed data $u^\delta(t)$ satisfying $\|u^\delta - u\| \le \delta$ are obtained. On uniform grids with different numbers of grid points $N$, the differential equations involving the Fréchet derivative (48) are solved by the finite element method with linear splines. The iteration is terminated by the stopping criterion (17) with $\tau = 5$.

We use the function

$f(t) = -e^t\big(1 + \tfrac12\sin(2\pi t) + \pi\cos(2\pi t)\big) + (e-1)\pi\cos(2\pi t)$

and the exact data $u(t) = e^t + (1-e)t - 1$, so that the exact solution is $c(t) = 1 + \tfrac12\sin(2\pi t)$.

We start the iteration with the initial guess

$c_0(t) = 1 + \tfrac12\sin(2\pi t) + 200\,t^2(1-t)^2(0.25-t)(0.75-t)$.

Then, according to [22], $c_0 - c = 200\,t^2(1-t)^2(0.25-t)(0.75-t) \notin \mathcal{R}\big(F'(c)^*\big)$.
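A compact sketch of this experiment, under simplifications stated up front: centered finite differences instead of linear-spline finite elements, synthetic data generated by the discrete forward map itself (so the discrete exact coefficient reproduces the data exactly), a divided-difference Jacobian at $c_0$, exact data ($\delta = 0$), and a small fixed iteration budget instead of the discrepancy stopping used for the tables below.

```python
import numpy as np

N = 16                                     # number of cells; nodes t_0, ..., t_N
h = 1.0 / N
t = np.linspace(0.0, 1.0, N + 1)

def stiffness(c):
    # finite-difference matrix for -(c u')' with u(0) = u(1) = 0 (interior nodes)
    cm = 0.5 * (c[:-1] + c[1:])            # midpoint values c_{i+1/2}
    return (np.diag(cm[:-1] + cm[1:])
            - np.diag(cm[1:-1], 1) - np.diag(cm[1:-1], -1)) / h**2

c_exact = 1.0 + 0.5 * np.sin(2 * np.pi * t)
u_exact = np.exp(t) + (1 - np.e) * t - 1.0          # exact data of the paper
f_int = stiffness(c_exact) @ u_exact[1:-1]          # discrete right-hand side

def forward(c):
    # discrete parameter-to-solution map F: c -> u(c) at the interior nodes
    return np.linalg.solve(stiffness(c), f_int)

c0 = c_exact + 200 * t**2 * (1 - t)**2 * (0.25 - t) * (0.75 - t)

# divided-difference Jacobian of the discrete forward map at c0
eps = 1e-6
F0 = forward(c0)
A0 = np.empty((N - 1, N + 1))
for k in range(N + 1):
    ck = c0.copy()
    ck[k] += eps
    A0[:, k] = (forward(ck) - F0) / eps

scale = np.sqrt(2) * np.linalg.norm(A0, 2)          # enforce ||A0|| <= 1/sqrt(2)
A0 /= scale
data = forward(c_exact) / scale                     # exact data (delta = 0)
M = (2 * np.eye(N + 1) - A0.T @ A0) @ A0.T          # fixed operator of (12)

c = c0.copy()
for _ in range(300):
    c = c - M @ (forward(c) / scale - data)         # simplified HPI step (12)
```

With noisy data one would replace the fixed budget by the stopping rule (17) with $\tau = 5$; with exact data the error norm decreases monotonically, in line with Theorem 1.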

We employ both the proposed simplified homotopy perturbation iteration (12) and the homotopy perturbation iteration (11). Several values of $\delta$ and grid points $N$ are chosen in order to show how the convergence rates depend on the noise level. The numerical results are given in Table 1 and Table 2. Fig. 1 to Fig. 6 present the corresponding graphical results for the methods (11) and (12).

Fig. 1 to Fig. 6 show that the approximation produced by the simplified homotopy perturbation iteration is better. Comparing the two methods in Table 1 and Table 2, the error norm of the simplified homotopy perturbation iteration is noticeably smaller, with fewer iteration steps and less computational time; the simplified homotopy perturbation iteration (12) also converges faster than the homotopy perturbation iteration (11).

δ  N  $n_\delta$  Error $= \|c_{n_\delta}^\delta - c\|$  Time (s)
0.01 17 424 1.3377 1.5375
0.005 17 4853 0.9980 16.9290
0.001 17 53252 0.2844 189.9022
0.0005 17 134204 0.1877 421.6156
0.01 33 1972 1.6762 8.7951
0.005 33 8511 1.1900 31.0274
0.01 65 4457 2.0520 21.7195
0.005 65 13948 1.3607 75.8592
Table 1. The results of the Simplified HPI (12).
δ  N  $n_\delta$  Error $= \|c_{n_\delta}^\delta - c\|$  Time (s)
0.01 17 510 1.3569 1.8747
0.005 17 5970 1.0488 18.4612
0.001 17 83441 0.3475 265.8005
0.0005 17 189874 0.2529 628.5853
0.01 33 2350 1.7366 9.1521
0.005 33 10628 1.2667 42.0180
0.01 65 5312 2.1569 26.6291
0.005 65 18020 1.4721 91.1350
Table 2. The results of the HPI (11).
Figure 1. Solution when N=17 and δ=0.01
Figure 2. Solution when N=17 and δ=0.005
Figure 3. Solution when N=17 and δ=0.001
Figure 4. Solution when N=17 and δ=0.0005
Figure 5. Solution when N=33 and δ=0.005
Figure 6. Solution when N=65 and δ=0.005

5. Conclusion

In this study we have analysed and evaluated a simplified version of the homotopy perturbation iteration (11). The proposed technique has the advantage of computing the Fréchet derivative only once, rather than at each iteration step. The computation becomes simpler than for the traditional homotopy perturbation iteration, since the iteration (12) and the source condition (25) only involve the Fréchet derivative at the initial guess $x_0$ for the exact solution $x^\dagger$ of (1). As demonstrated by the numerical example, the proposed iteration is competitive with the traditional homotopy perturbation iteration in terms of both overall computing time and error.

References