Convergence of the θ-Euler-Maruyama method for a class of Stochastic Volterra Integro-Differential Equations

Samiha Mouchir and Abdeldjalil Slama
(Date: May 29, 2024; accepted: November 14, 2024; published online: December 18, 2024.)
Abstract.

This paper addresses the convergence analysis of the θ-Euler-Maruyama method for a class of stochastic Volterra integro-differential equations (SVIDEs). First, we discuss the existence, uniqueness, boundedness and Hölder continuity of the theoretical solution. Subsequently, the strong convergence order of the θ-Euler-Maruyama method for SVIDEs is established. Finally, we provide numerical examples to illustrate the theoretical results.

Key words and phrases:
Stochastic Volterra integro-differential equations, θ-Euler-Maruyama method, strong convergence, Hölder continuity.
2005 Mathematics Subject Classification:
65C30, 60B10, 65L20.
Laboratory of Mathematics, Modeling and Applications (LaMMA), University of Adrar, Adrar, Algeria. mou.samiha@univ-adrar.edu.dz, aslama@univ-adrar.edu.dz

1. Introduction

Stochastic differential equations (SDEs) have attracted significant attention and are currently emerging as a modeling tool in various scientific fields, including but not limited to telecommunications (see [15]), economics, finance (see [5]), biology, chemistry, and quantum field theory.

Volterra integral equations (VIEs) were introduced by Vito Volterra and were later studied by Traian Lalescu in his 1908 thesis "Sur les équations de Volterra", written under the direction of Émile Picard. Volterra integral equations find application in viscoelastic materials, fluid mechanics, and demography (see, e.g., [10], [3], [17], [8], [7]). Stochastic Volterra integral equations (SVIEs) extend ordinary Volterra integral equations by including random noise, making them suitable for modeling systems with stochastic components. SVIEs find applications in various fields, including mathematical finance, biology, physics and engineering. For example, in finance, SVIEs are used to model the evolution of financial asset prices over time, taking into account the stochastic nature of market movements. In biology, they can be used to describe population dynamics subject to random environmental factors. Consequently, SVIEs have attracted the attention of many researchers in recent years (see, e.g., [10], [17], [8], [7]). Obtaining exact solutions of stochastic differential equations (SDEs) remains a challenging problem (cf. [6], [16], [3], [11], [4], [17], [13], [7]): although many analytical techniques are available, the complexity of these equations makes closed-form solutions difficult to obtain, so numerical methods are widely used. Among them are the Milstein method, Runge-Kutta methods (see [1]), the Euler-Maruyama method, the stochastic theta method, and others (see [10], [9], [12], [2], [17]). Zong et al. [18] studied the convergence and stability of two classes of theta-Milstein schemes for stochastic differential equations.

Recently, Deng et al. [4] examined the semi-implicit Euler method for non-linear time-changed stochastic differential equations. Wang et al. [14] investigated stochastic theta methods (STMs) for stochastic differential equations (SDEs) with non-globally Lipschitz drift and diffusion coefficients. Zhang et al. [17] carried out the numerical analysis of the Euler-Maruyama (EM) method for the following generalized SVIDEs:

\[dY(t)=f\Big(Y(t),\int_0^t K_1(t,s)Y(s)\,ds,\int_0^t\sigma_1(t,s)Y(s)\,dB(s)\Big)dt+g\Big(Y(t),\int_0^t K_2(t,s)Y(s)\,ds,\int_0^t\sigma_2(t,s)Y(s)\,dB(s)\Big)dB(t).\]

Lan et al. [7] presented the θ-EM method for the following SVIDEs:

\[dX(t)=f\Big(X(t),\int_0^t G(t-s)X(s)\,ds\Big)dt+g\Big(X(t),\int_0^t H(t-s)X(s)\,ds\Big)dB(t).\]

Inspired and motivated by the above works [4, 14, 17, 7], in this paper, we study the strong convergence of the θ-Euler-Maruyama method for a class of stochastic Volterra integro-differential equations (SVIDEs) as follows:

\[(1.1)\qquad dY(t)=f\Big(Y(t),\int_0^t\sigma_1(t,s)Y(s)\,dB(s)\Big)dt+g\Big(Y(t),\int_0^t\sigma_2(t,s)Y(s)\,ds\Big)dB(t),\qquad t\in[0,T],\]

with initial condition $Y(0)=Y_0$, where $f:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ and $g:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ are given functions. The kernels $\sigma_i:D\to\mathbb{R}$ are continuous on $D:=\{(t,s):0\le s\le t\le T\}$, with norm $\|\sigma_i\|=\max_{(t,s)\in D}|\sigma_i(t,s)|$ for $i=1,2$. Here $Y(t)$ is a stochastic process defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$, $B(t)$ is a standard one-dimensional Brownian motion defined on the same probability space, and $\mathbb{E}|Y_0|^2<\infty$.

The structure of this paper is as follows: we introduce some fundamental notations and preliminaries in Section 2. We then present the definition of the solution of equation (1.1) and investigate the existence, uniqueness, boundedness and Hölder continuity of the analytic solution in Section 3. The θ-EM method for equation (1.1) and its order of convergence are presented in Section 4. Finally, we provide numerical examples in Section 5 to illustrate the theoretical results.

2. Preliminaries

Let $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\ge 0},\mathbb{P})$ be a complete probability space with a filtration $(\mathcal{F}_t)_{t\ge 0}$ satisfying the usual conditions, and let $\mathbb{E}$ denote the expectation with respect to $\mathbb{P}$. A one-dimensional Brownian motion defined on this probability space is denoted by $B(t)$. Let $L^2([0,T];\mathbb{R})$ be the family of Borel measurable functions $\Phi:[0,T]\to\mathbb{R}$ such that $\int_0^T|\Phi(t)|^2dt<\infty$. We denote by $\mathcal{L}^2([0,T];\mathbb{R})$ the family of $\mathbb{R}$-valued $\mathcal{F}_t$-adapted processes $\{\Phi(t)\}_{t\in[0,T]}$ such that $\int_0^T|\Phi(t)|^2dt<\infty$ a.s., and by $\mathcal{M}^2([0,T];\mathbb{R})$ the family of $\mathcal{F}_t$-adapted processes $\{\Phi(t)\}_{t\in[0,T]}\in\mathcal{L}^2([0,T];\mathbb{R})$ such that $\mathbb{E}\big[\int_0^T|\Phi(t)|^2dt\big]<\infty$. For $a,b\in\mathbb{R}$, $a\wedge b:=\min\{a,b\}$ and $a\vee b:=\max\{a,b\}$. If $\mathbb{D}$ is a subset of $\Omega$, its indicator function is denoted by $\mathbf{1}_{\mathbb{D}}$. The integral form of equation (1.1) reads as follows:

\[(2.2)\qquad Y(t)=Y(0)+\int_0^t f\Big(Y(u),\int_0^u\sigma_1(u,s)Y(s)\,dB(s)\Big)du+\int_0^t g\Big(Y(u),\int_0^u\sigma_2(u,s)Y(s)\,ds\Big)dB(u).\]

Before introducing the definition of a solution, we assume that $Y(\cdot)\in\mathcal{L}^2([0,T];\mathbb{R})$ and set

\[F(t):=f\Big(Y(t),\int_0^t\sigma_1(t,s)Y(s)\,dB(s)\Big),\qquad G(t):=g\Big(Y(t),\int_0^t\sigma_2(t,s)Y(s)\,ds\Big).\]
Definition 2.1.

A solution of (2.2) is a continuous stochastic process $\{Y(t)\}_{t\in[0,T]}$ with values in $\mathbb{R}$ satisfying the following conditions:

  i) $Y(\cdot)\in\mathcal{L}^2([0,T];\mathbb{R})$, $F(\cdot)\in\mathcal{L}^1([0,T];\mathbb{R})$ and $G(\cdot)\in\mathcal{L}^2([0,T];\mathbb{R})$;

  ii) equation (2.2) holds for every $t\in[0,T]$ with probability 1.

The solution $\{Y(t)\}$ is said to be unique if any other solution $\{\bar Y(t)\}$ coincides with it, i.e.,

\[\mathbb{P}\{Y(t)=\bar Y(t)\ \text{for all}\ t\in[0,T]\}=1.\]
Definition 2.2.

Let $0<\delta\le 1$. A stochastic process $Y(t,\omega):[0,T]\times\Omega\to\mathbb{R}$ is said to be Hölder continuous (in the mean-square sense) with exponent $\delta$ if there exists a constant $M$ such that

\[\mathbb{E}|Y(t)-Y(r)|^2\le M|t-r|^{2\delta},\qquad \forall\, t,r\in[0,T].\]
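As a quick illustration of this definition (our remark, not part of the original statement), Brownian motion itself is Hölder continuous in this sense with exponent $\delta=\tfrac12$, since for $t\ge r$,

\[\mathbb{E}|B(t)-B(r)|^2=t-r=|t-r|^{2\cdot\frac12},\]

so one may take $M=1$. Theorem 3.2 below shows that the solution of (2.2) has the same exponent.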

In this article, we propose the following hypotheses.

  (H1) (Lipschitz condition.) There exists a positive constant $K$ such that

  \[|f(x,y)-f(\bar x,\bar y)|^2\vee|g(x,y)-g(\bar x,\bar y)|^2\le K\big(|x-\bar x|^2+|y-\bar y|^2\big),\]

  for all $x,y,\bar x,\bar y\in\mathbb{R}$.

  (H2) (Linear growth condition.) For all $x,y\in\mathbb{R}$,

  \[|f(x,y)|^2\vee|g(x,y)|^2\le K'\big(1+|x|^2+|y|^2\big),\]

  where $K'=2\big(K\vee|f(0,0)|^2\vee|g(0,0)|^2\big)$ (a short derivation of (H2) from (H1) is sketched after this list).

  (H3) (Mean value theorem.) The kernels $\sigma_i\in C^1(D)$, $i=1,2$, of (1.1) satisfy

  \[\Big|\frac{\partial\sigma_i}{\partial t}(t,s)\Big|^2\le K'',\]

  with $K''>0$, for all $(t,s)\in D$, so that

  \[|\sigma_i(u,s)-\sigma_i(u-h,s)|^2=\Big|\frac{\partial\sigma_i}{\partial t}(\xi_i,s)\Big|^2h^2\le K''h^2,\]

  where $\xi_i\in(u-h,u)$.
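The following short derivation (included by us for completeness) indicates why (H1) yields the linear growth bound (H2) with the stated constant $K'$: for $x,y\in\mathbb{R}$,

\[|f(x,y)|^2\le 2|f(x,y)-f(0,0)|^2+2|f(0,0)|^2\le 2K(|x|^2+|y|^2)+2|f(0,0)|^2\le K'(1+|x|^2+|y|^2),\]

with $K'=2(K\vee|f(0,0)|^2\vee|g(0,0)|^2)$, and the same estimate holds for $g$.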

The existence and uniqueness of the solution to (1.1) under hypotheses (H1) and (H2) are demonstrated by the following theorem. The proof is similar to the proof in [17], which pertains to the case $\theta=0$.

3. Theoretical analysis of the class of SVIDE

In this section, we present the theoretical results. We establish the existence and uniqueness of the solution to (2.2) and verify the Hölder continuity of the analytic solution.

3.1. The existence and uniqueness of the analytical solution

We now discuss the existence and uniqueness of the solution of equation (2.2). We first present the following lemma.

Lemma 3.1.

Assume that (H2) is satisfied. If $Y(t)\in\mathcal{M}^2([0,T];\mathbb{R})$ is a solution of the SVIDE (1.1), then

\[(3.3)\qquad \mathbb{E}|Y(t)|^2\le M_0,\qquad t\in[0,T],\]

where $M_0$ depends on $\|\sigma_1\|,\|\sigma_2\|,T,K'$ and $Y_0$.

Proof.

For every integer $n\ge 1$, let $\tau_n$ be the stopping time

\[\tau_n=T\wedge\inf\{t\in[0,T]:|Y(t)|\ge n\}.\]

Evidently, $\tau_n\to T$ a.s. as $n\to\infty$. Define $Y_n(t):=Y(t\wedge\tau_n)$ for $t\in[0,T]$. It can be verified that $Y_n(t)$ satisfies

\begin{align*}
Y_n(t)&=Y(0)+\int_0^t f\Big(Y_n(u),\int_0^u\sigma_1(u,s)Y_n(s)\mathbf{1}_{[0,\tau_n]}(s)\,dB(s)\Big)\mathbf{1}_{[0,\tau_n]}(u)\,du\\
&\quad+\int_0^t g\Big(Y_n(u),\int_0^u\sigma_2(u,s)Y_n(s)\mathbf{1}_{[0,\tau_n]}(s)\,ds\Big)\mathbf{1}_{[0,\tau_n]}(u)\,dB(u),\qquad t\in[0,T].
\end{align*}

By Cauchy's inequality and the inequality $(x+y+z)^2\le 3(x^2+y^2+z^2)$, we obtain

\begin{align*}
|Y_n(t)|^2&\le 3|Y_0|^2+3t\int_0^t\Big|f\Big(Y_n(u),\int_0^u\sigma_1(u,s)Y_n(s)\mathbf{1}_{[0,\tau_n]}(s)\,dB(s)\Big)\mathbf{1}_{[0,\tau_n]}(u)\Big|^2du\\
&\quad+3\Big|\int_0^t g\Big(Y_n(u),\int_0^u\sigma_2(u,s)Y_n(s)\mathbf{1}_{[0,\tau_n]}(s)\,ds\Big)\mathbf{1}_{[0,\tau_n]}(u)\,dB(u)\Big|^2.
\end{align*}

Taking expectations and using the Itô isometry, we show that

\[(3.4)\qquad \mathbb{E}|Y_n(t)|^2\le 3\mathbb{E}|Y_0|^2+3TF+3G,\]

where

\[F:=\mathbb{E}\Big[\int_0^t\Big|f\Big(Y_n(u),\int_0^u\sigma_1(u,s)Y_n(s)\mathbf{1}_{[0,\tau_n]}(s)\,dB(s)\Big)\mathbf{1}_{[0,\tau_n]}(u)\Big|^2du\Big],\]

and

\[G:=\mathbb{E}\Big[\int_0^t\Big|g\Big(Y_n(u),\int_0^u\sigma_2(u,s)Y_n(s)\mathbf{1}_{[0,\tau_n]}(s)\,ds\Big)\mathbf{1}_{[0,\tau_n]}(u)\Big|^2du\Big].\]

First, we estimate $F$. Using the Itô isometry and assumption (H2), we get

\begin{align*}
(3.5)\qquad F&\le K'\int_0^t\Big[1+\mathbb{E}|Y_n(u)|^2+\mathbb{E}\Big|\int_0^u\sigma_1(u,s)Y_n(s)\,dB(s)\Big|^2\Big]du\\
&\le K'\int_0^t\Big[1+\mathbb{E}|Y_n(u)|^2+\|\sigma_1\|^2\int_0^u\mathbb{E}|Y_n(s)|^2ds\Big]du\\
&\le K'T+K'(1+\|\sigma_1\|^2T)\int_0^t\mathbb{E}|Y_n(s)|^2ds.
\end{align*}

Second, we estimate $G$. By (H2) and Cauchy's inequality, we get

\begin{align*}
(3.6)\qquad G&\le K'\int_0^t\Big[1+\mathbb{E}|Y_n(u)|^2+\mathbb{E}\Big|\int_0^u\sigma_2(u,s)Y_n(s)\,ds\Big|^2\Big]du\\
&\le K'\int_0^t\Big[1+\mathbb{E}|Y_n(u)|^2+\|\sigma_2\|^2T\int_0^u\mathbb{E}|Y_n(s)|^2ds\Big]du\\
&\le K'T+K'(1+\|\sigma_2\|^2T^2)\int_0^t\mathbb{E}|Y_n(s)|^2ds.
\end{align*}

By substituting (3.5) and (3.6) into (3.4), we have

\begin{align*}
\mathbb{E}|Y_n(t)|^2&\le 3\mathbb{E}|Y_0|^2+3T\Big(K'T+K'(1+\|\sigma_1\|^2T)\int_0^t\mathbb{E}|Y_n(s)|^2ds\Big)+3\Big(K'T+K'(1+\|\sigma_2\|^2T^2)\int_0^t\mathbb{E}|Y_n(s)|^2ds\Big)\\
&\le 3\mathbb{E}|Y_0|^2+3K'T(T+1)+3K'T(1+\|\sigma_1\|^2T)\int_0^t\mathbb{E}|Y_n(s)|^2ds+3K'(1+\|\sigma_2\|^2T^2)\int_0^t\mathbb{E}|Y_n(s)|^2ds\\
&=3\mathbb{E}|Y_0|^2+3K'T(T+1)+3K'(C_1T+C_2)\int_0^t\mathbb{E}|Y_n(s)|^2ds,
\end{align*}

where $C_1:=1+\|\sigma_1\|^2T$ and $C_2:=1+\|\sigma_2\|^2T^2$.

By Gronwall's inequality, we have

\[\mathbb{E}|Y_n(t)|^2\le M_0,\]

where

\[M_0:=3\big(\mathbb{E}|Y_0|^2+C_3\big)\exp(3TC),\qquad C:=K'(C_1T+C_2),\quad C_3:=K'T(T+1).\]

Since $\mathbb{E}|Y(t\wedge\tau_n)|^2\le M_0$ for every $t\in[0,T]$ and every $n$, letting $n\to\infty$ and applying Fatou's lemma, we conclude that

\[\mathbb{E}|Y(t)|^2\le M_0.\]
Theorem 3.1.

Suppose (H1) holds. Then there exists a unique solution $Y(t)\in\mathcal{M}^2([0,T];\mathbb{R})$ to (1.1), and

\[\mathbb{E}|Y(t)|^2\le M_1\qquad\text{for } t\in[0,T],\]

where

\[M_1:=3\big((1+CT)\mathbb{E}|Y_0|^2+C_3\big)\exp(3CT).\]
Proof.

We will divide the proof into two fundamental steps.

Step I. Uniqueness:

Let $Y_1(t)$ and $Y_2(t)$ be two solutions of (1.1). From Lemma 3.1 we have $Y_1(t),Y_2(t)\in\mathcal{M}^2([0,T];\mathbb{R})$. By (H1), Cauchy's inequality and the Itô isometry, we show that

\begin{align*}
\mathbb{E}|Y_1(t)-Y_2(t)|^2&\le 2\mathbb{E}\Big|\int_0^t f\Big(Y_1(u),\int_0^u\sigma_1(u,s)Y_1(s)\,dB(s)\Big)du-\int_0^t f\Big(Y_2(u),\int_0^u\sigma_1(u,s)Y_2(s)\,dB(s)\Big)du\Big|^2\\
&\quad+2\mathbb{E}\Big|\int_0^t g\Big(Y_1(u),\int_0^u\sigma_2(u,s)Y_1(s)\,ds\Big)dB(u)-\int_0^t g\Big(Y_2(u),\int_0^u\sigma_2(u,s)Y_2(s)\,ds\Big)dB(u)\Big|^2\\
&\le 2T\int_0^t\mathbb{E}\Big|f\Big(Y_1(u),\int_0^u\sigma_1(u,s)Y_1(s)\,dB(s)\Big)-f\Big(Y_2(u),\int_0^u\sigma_1(u,s)Y_2(s)\,dB(s)\Big)\Big|^2du\\
&\quad+2\int_0^t\mathbb{E}\Big|g\Big(Y_1(u),\int_0^u\sigma_2(u,s)Y_1(s)\,ds\Big)-g\Big(Y_2(u),\int_0^u\sigma_2(u,s)Y_2(s)\,ds\Big)\Big|^2du\\
&\le 2TK\int_0^t\Big(\mathbb{E}|Y_1(u)-Y_2(u)|^2+\int_0^u\mathbb{E}|\sigma_1(u,s)(Y_1(s)-Y_2(s))|^2ds\Big)du\\
&\quad+2K\int_0^t\Big(\mathbb{E}|Y_1(u)-Y_2(u)|^2+u\int_0^u\mathbb{E}|\sigma_2(u,s)(Y_1(s)-Y_2(s))|^2ds\Big)du\\
&\le 2TK\int_0^t\Big(\mathbb{E}|Y_1(u)-Y_2(u)|^2+\|\sigma_1\|^2\int_0^u\mathbb{E}|Y_1(s)-Y_2(s)|^2ds\Big)du\\
&\quad+2K\int_0^t\Big(\mathbb{E}|Y_1(u)-Y_2(u)|^2+T\|\sigma_2\|^2\int_0^u\mathbb{E}|Y_1(s)-Y_2(s)|^2ds\Big)du\\
&\le 2K(TC_1+C_2)\int_0^t\mathbb{E}|Y_1(s)-Y_2(s)|^2ds\\
&\le C\int_0^t\mathbb{E}|Y_1(s)-Y_2(s)|^2ds.
\end{align*}

Finally, from Gronwall's inequality we conclude that

\[\mathbb{E}|Y_1(t)-Y_2(t)|^2=0,\]

which proves that $Y_1(t)=Y_2(t)$ a.s. for every $t\in[0,T]$.

Step II. Existence:

Let $Y_0(t)\equiv Y_0$ and define the Picard approximations

\[(3.7)\qquad Y_n(t)=Y_0+\int_0^t f\Big(Y_{n-1}(u),\int_0^u\sigma_1(u,s)Y_{n-1}(s)\,dB(s)\Big)du+\int_0^t g\Big(Y_{n-1}(u),\int_0^u\sigma_2(u,s)Y_{n-1}(s)\,ds\Big)dB(u),\]

for $t\in[0,T]$ and $n=1,2,\dots$. It is evident that $Y_0(\cdot)\in\mathcal{M}^2([0,T];\mathbb{R})$, and by induction we also get $Y_n(\cdot)\in\mathcal{M}^2([0,T];\mathbb{R})$. As in the proof of Lemma 3.1, we have

\begin{align*}
\mathbb{E}|Y_n(t)|^2&\le 3\mathbb{E}|Y_0|^2+3\mathbb{E}\Big|\int_0^t f\Big(Y_{n-1}(u),\int_0^u\sigma_1(u,s)Y_{n-1}(s)\,dB(s)\Big)du\Big|^2\\
&\quad+3\mathbb{E}\Big|\int_0^t g\Big(Y_{n-1}(u),\int_0^u\sigma_2(u,s)Y_{n-1}(s)\,ds\Big)dB(u)\Big|^2\\
&\le 3\mathbb{E}|Y_0|^2+3T\Big(K'T+K'(1+\|\sigma_1\|^2T)\int_0^t\mathbb{E}|Y_{n-1}(s)|^2ds\Big)+3\Big(K'T+K'(1+\|\sigma_2\|^2T^2)\int_0^t\mathbb{E}|Y_{n-1}(s)|^2ds\Big)\\
&\le 3\mathbb{E}|Y_0|^2+3K'T(T+1)+3K'(C_1T+C_2)\int_0^t\mathbb{E}|Y_{n-1}(s)|^2ds\\
&=3\big(\mathbb{E}|Y_0|^2+C_3\big)+3C\int_0^t\mathbb{E}|Y_{n-1}(s)|^2ds,
\end{align*}

where $C$ and $C_3$ are those from Lemma 3.1. Thus, for each $n\ge 1$, we have

\begin{align*}
\max_{1\le k\le n}\mathbb{E}|Y_k(t)|^2&\le 3\big(\mathbb{E}|Y_0|^2+C_3\big)+3C\int_0^t\max_{1\le k\le n}\mathbb{E}|Y_{k-1}(s)|^2ds\\
&\le 3\big(\mathbb{E}|Y_0|^2+C_3\big)+3C\int_0^t\Big[\mathbb{E}|Y_0|^2+\max_{1\le k\le n}\mathbb{E}|Y_k(s)|^2\Big]ds\\
&\le 3\big((1+CT)\mathbb{E}|Y_0|^2+C_3\big)+3C\int_0^t\max_{1\le k\le n}\mathbb{E}|Y_k(s)|^2ds.
\end{align*}

By Gronwall's inequality, we obtain

\[\max_{1\le k\le n}\mathbb{E}|Y_k(t)|^2\le M_1,\]

where $M_1:=3\big((1+CT)\mathbb{E}|Y_0|^2+C_3\big)\exp(3CT)$. Since $n$ is arbitrary, we conclude that

\[(3.8)\qquad \mathbb{E}|Y_n(t)|^2\le M_1,\qquad\text{for } t\in[0,T],\ n\ge 1.\]

Similarly to the proof of Lemma 3.1, one has

\begin{align*}
\mathbb{E}|Y_1(t)-Y_0(t)|^2&=\mathbb{E}|Y_1(t)-Y_0|^2\\
&\le 2\mathbb{E}\Big|\int_0^t f\Big(Y_0(u),\int_0^u\sigma_1(u,s)Y_0(s)\,dB(s)\Big)du\Big|^2+2\mathbb{E}\Big|\int_0^t g\Big(Y_0(u),\int_0^u\sigma_2(u,s)Y_0(s)\,ds\Big)dB(u)\Big|^2\\
&\le 2K'T\big[(1+T)+(TC_1+C_2)\mathbb{E}|Y_0|^2\big]=:C_0,
\end{align*}

and

\begin{align*}
\mathbb{E}|Y_2(t)-Y_1(t)|^2&\le 2\mathbb{E}\Big|\int_0^t\Big(f\Big(Y_1(u),\int_0^u\sigma_1(u,s)Y_1(s)\,dB(s)\Big)-f\Big(Y_0(u),\int_0^u\sigma_1(u,s)Y_0(s)\,dB(s)\Big)\Big)du\Big|^2\\
&\quad+2\mathbb{E}\Big|\int_0^t\Big(g\Big(Y_1(u),\int_0^u\sigma_2(u,s)Y_1(s)\,ds\Big)-g\Big(Y_0(u),\int_0^u\sigma_2(u,s)Y_0(s)\,ds\Big)\Big)dB(u)\Big|^2\\
&\le 2TK\int_0^t\Big(\mathbb{E}|Y_1(u)-Y_0(u)|^2+\|\sigma_1\|^2\int_0^u\mathbb{E}|Y_1(s)-Y_0(s)|^2ds\Big)du\\
&\quad+2K\int_0^t\Big(\mathbb{E}|Y_1(u)-Y_0(u)|^2+T\|\sigma_2\|^2\int_0^u\mathbb{E}|Y_1(s)-Y_0(s)|^2ds\Big)du\\
&\le 2K(TC_1+C_2)\int_0^t\mathbb{E}|Y_1(s)-Y_0(s)|^2ds\\
&\le 2C\int_0^t C_0\,ds=2CC_0t.
\end{align*}

We claim that, for $n\ge 1$,

\[(3.9)\qquad \mathbb{E}|Y_n(t)-Y_{n-1}(t)|^2\le\frac{C_0(2Ct)^{n-1}}{(n-1)!}.\]

We prove (3.9) by induction: assuming it holds for $n$, we show that it also holds for $n+1$. Note that

\begin{align*}
(3.10)\qquad \mathbb{E}|Y_{n+1}(t)-Y_n(t)|^2&\le 2\mathbb{E}\Big|\int_0^t\Big[f\Big(Y_n(u),\int_0^u\sigma_1(u,s)Y_n(s)\,dB(s)\Big)-f\Big(Y_{n-1}(u),\int_0^u\sigma_1(u,s)Y_{n-1}(s)\,dB(s)\Big)\Big]du\Big|^2\\
&\quad+2\mathbb{E}\Big|\int_0^t\Big[g\Big(Y_n(u),\int_0^u\sigma_2(u,s)Y_n(s)\,ds\Big)-g\Big(Y_{n-1}(u),\int_0^u\sigma_2(u,s)Y_{n-1}(s)\,ds\Big)\Big]dB(u)\Big|^2.
\end{align*}

Using (3.9) and arguing as in the proof of Lemma 3.1, we show that

\begin{align*}
\mathbb{E}|Y_{n+1}(t)-Y_n(t)|^2&\le 2C\int_0^t\mathbb{E}|Y_n(s)-Y_{n-1}(s)|^2ds\\
&\le 2C\int_0^t\frac{C_0(2Cs)^{n-1}}{(n-1)!}ds=\frac{2CC_0(2C)^{n-1}}{(n-1)!}\cdot\frac{s^n}{n}\Big|_0^t=\frac{C_0(2Ct)^n}{n!}.
\end{align*}

By Chebyshev's inequality, we get

\[\mathbb{P}\Big\{|Y_n(t)-Y_{n-1}(t)|^2>\frac{1}{2^n}\Big\}\le 2^n\,\frac{C_0(2CT)^{n-1}}{(n-1)!}=2C_0\frac{(4CT)^{n-1}}{(n-1)!}.\]

Since $\sum_{n=1}^{\infty}2C_0\frac{(4CT)^{n-1}}{(n-1)!}<\infty$, the Borel-Cantelli lemma shows that, for almost every $\omega\in\Omega$, there exists $n_0=n_0(\omega)$ such that

\[|Y_n(t)-Y_{n-1}(t)|^2\le\frac{1}{2^n},\qquad\text{for } n\ge n_0.\]

It follows that, with probability 1, the partial sums

\[Y_0(t)+\sum_{k=1}^{n}\big(Y_k(t)-Y_{k-1}(t)\big)=Y_n(t)\]

converge uniformly for $t\in[0,T]$. Let $Y(t)$ denote the limit; $Y(t)$ is obviously continuous and $\mathcal{F}_t$-adapted. On the other hand, it can be seen from (3.9) that $\{Y_n(t)\}_{n\ge 1}$ is a Cauchy sequence in $L^2$ for every $t$:

\begin{align*}
\big(\mathbb{E}|Y_n(t)-Y_m(t)|^2\big)^{1/2}&=\|Y_n(t)-Y_m(t)\|_{L^2}\le\sum_{k=m+1}^{n}\|Y_k(t)-Y_{k-1}(t)\|_{L^2}\\
&\le\sum_{k=m+1}^{n}\Big(\frac{C_0(2CT)^{k-1}}{(k-1)!}\Big)^{1/2}.
\end{align*}

Letting $m,n\to\infty$, we therefore have

\[\big(\mathbb{E}|Y_n(t)-Y_m(t)|^2\big)^{1/2}\to 0,\]

so $\{Y_n(t)\}_{n\ge 1}$ is a Cauchy sequence in $\mathcal{M}^2([0,T];\mathbb{R})$, and hence $Y_n(t)\to Y(t)$ in $\mathcal{M}^2([0,T];\mathbb{R})$. Letting $n\to\infty$ in (3.8) gives

\[\mathbb{E}|Y(t)|^2\le M_1,\qquad\text{for } t\in[0,T],\]

where $M_1$ depends on $\|\sigma_1\|,\|\sigma_2\|,T,K'$ and $Y_0$; consequently, $Y(\cdot)\in\mathcal{M}^2([0,T];\mathbb{R})$. It remains to show that $Y(t)$ satisfies (2.2). Note that

\begin{align*}
&\mathbb{E}\Big|\int_0^t f\Big(Y_n(u),\int_0^u\sigma_1(u,s)Y_n(s)\,dB(s)\Big)du-\int_0^t f\Big(Y(u),\int_0^u\sigma_1(u,s)Y(s)\,dB(s)\Big)du\Big|^2\\
&\quad+\mathbb{E}\Big|\int_0^t g\Big(Y_n(u),\int_0^u\sigma_2(u,s)Y_n(s)\,ds\Big)dB(u)-\int_0^t g\Big(Y(u),\int_0^u\sigma_2(u,s)Y(s)\,ds\Big)dB(u)\Big|^2\\
&\le C\int_0^t\mathbb{E}|Y_n(s)-Y(s)|^2ds.
\end{align*}

Letting $n\to\infty$ in (3.7), we get

\[(3.11)\qquad Y(t)=Y_0+\int_0^t f\Big(Y(u),\int_0^u\sigma_1(u,s)Y(s)\,dB(s)\Big)du+\int_0^t g\Big(Y(u),\int_0^u\sigma_2(u,s)Y(s)\,ds\Big)dB(u).\]

The proof is complete.

3.2. Hölder continuity of the analytic solutions

We now establish the Hölder continuity of the analytic solution of the SVIDE (2.2).

Theorem 3.2.

Assume that (H2) holds. Then the solution $Y(t)$ is Hölder continuous with exponent $\delta=\tfrac12$.

Proof.

For $0\le r<t\le T$,

\[Y(t)-Y(r)=\int_r^t f\Big(Y(u),\int_0^u\sigma_1(u,s)Y(s)\,dB(s)\Big)du+\int_r^t g\Big(Y(u),\int_0^u\sigma_2(u,s)Y(s)\,ds\Big)dB(u).\]

Taking expectations, we obtain

\[(3.12)\qquad \mathbb{E}|Y(t)-Y(r)|^2\le 2\mathbb{E}\Big|\int_r^t f\Big(Y(u),\int_0^u\sigma_1(u,s)Y(s)\,dB(s)\Big)du\Big|^2+2\mathbb{E}\Big|\int_r^t g\Big(Y(u),\int_0^u\sigma_2(u,s)Y(s)\,ds\Big)dB(u)\Big|^2.\]

Using Cauchy's inequality, the Itô isometry and (H2), we have

\begin{align*}
(3.13)\qquad \mathbb{E}\Big|\int_r^t f\Big(Y(u),\int_0^u\sigma_1(u,s)Y(s)\,dB(s)\Big)du\Big|^2&\le(t-r)\int_r^t K'\Big[1+\mathbb{E}|Y(u)|^2+\mathbb{E}\Big|\int_0^u\sigma_1(u,s)Y(s)\,dB(s)\Big|^2\Big]du\\
&\le(t-r)K'\int_r^t\Big[1+\mathbb{E}|Y(u)|^2+\|\sigma_1\|^2\int_0^u\mathbb{E}|Y(s)|^2ds\Big]du\\
&\le(t-r)K'\int_r^t\big[1+(1+T\|\sigma_1\|^2)M_1\big]du\\
&\le K'T(1+C_1M_1)(t-r).
\end{align*}

Similarly, we get

\begin{align*}
(3.14)\qquad \mathbb{E}\Big|\int_r^t g\Big(Y(u),\int_0^u\sigma_2(u,s)Y(s)\,ds\Big)dB(u)\Big|^2&\le\int_r^t K'\Big[1+\mathbb{E}|Y(u)|^2+\mathbb{E}\Big|\int_0^u\sigma_2(u,s)Y(s)\,ds\Big|^2\Big]du\\
&\le K'\int_r^t\Big[1+\mathbb{E}|Y(u)|^2+T\|\sigma_2\|^2\int_0^u\mathbb{E}|Y(s)|^2ds\Big]du\\
&\le K'\int_r^t\big[1+(1+T^2\|\sigma_2\|^2)M_1\big]du\\
&\le K'(1+C_2M_1)(t-r).
\end{align*}

By substituting (3.13) and (3.14) into (3.12), we have

\[\mathbb{E}|Y(t)-Y(r)|^2\le M|t-r|,\]

where $M:=2K'\big[T(1+C_1M_1)+(1+C_2M_1)\big]$.

Consequently, $Y(t)$, $t\in[0,T]$, is Hölder continuous with exponent $\delta=\tfrac12$.

4. Numerical analysis of the class of SVIDE

Let Ih:={tn=nh,n=0,1,,N},I=[0,T]. For n=0,1,,N1, we have defined the numerical results of SVIDE

4.1. θ-Euler-Maruyama method

We apply the θ-EM method to the SVIDE (1.1) (see [1], [3] and the references therein):

\begin{align*}
(4.1)\qquad X_{n+1}&=X_n+h\Big[\theta f\Big(X_{n+1},\sum_{i=0}^{n-1}\sigma_1(t_{n+1},t_{i+1})X_{i+1}\Delta B_{i+1}\Big)+(1-\theta)f\Big(X_n,\sum_{i=0}^{n-1}\sigma_1(t_n,t_i)X_i\Delta B_i\Big)\Big]\\
&\quad+g\Big(X_n,\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\sigma_2(t_n,s)X_i\,ds\Big)\Delta B_n,
\end{align*}

where $\theta\in[0,1]$, with initial data $X_0=Y_0$, $t_n=nh$ and $\Delta B_n=B(t_{n+1})-B(t_n)$. By induction, we rewrite (4.1) in the following form:

\begin{align*}
(4.2)\qquad X_{n+1}&=X_0+\sum_{j=0}^{n}h\Big(\theta f\Big(X_{j+1},\sum_{i=0}^{j-1}\sigma_1(t_{j+1},t_{i+1})X_{i+1}\Delta B_{i+1}\Big)+(1-\theta)f\Big(X_j,\sum_{i=0}^{j-1}\sigma_1(t_j,t_i)X_i\Delta B_i\Big)\Big)\\
&\quad+\sum_{j=0}^{n}g\Big(X_j,\sum_{i=0}^{j-1}\int_{t_i}^{t_{i+1}}\sigma_2(t_j,s)X_i\,ds\Big)\Delta B_j.
\end{align*}
Remark 4.1.

The scheme (4.1) is called the θ-EM method. The choice $\theta=0$ gives the Euler-Maruyama method of [4],

\[(4.3)\qquad X_{n+1}=X_n+hf\Big(X_n,\sum_{i=0}^{n-1}\sigma_1(t_n,t_i)X_i\Delta B_i\Big)+g\Big(X_n,\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\sigma_2(t_n,s)X_i\,ds\Big)\Delta B_n,\]

$\theta=\tfrac12$ gives the trapezoidal scheme, and $\theta=1$ gives the implicit (backward) Euler method of [2].
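To make the recursion (4.1) concrete, the following Python sketch implements one reading of the scheme for scalar, NumPy-vectorized user-supplied coefficients f, g, sigma1, sigma2. It is a minimal illustration under our own implementation choices (a rectangle rule for the inner Lebesgue integral, whereas (4.1) uses the exact integral of σ₂ over each subinterval, and a simple fixed-point iteration for the implicit drift term); it is not code from the paper.

```python
import numpy as np

def theta_em(f, g, sigma1, sigma2, y0, T, N, theta=0.5, rng=None, fp_iters=20):
    """One sample path of the theta-Euler-Maruyama scheme (4.1) on [0, T].

    f, g : scalar drift and diffusion coefficients, R x R -> R.
    sigma1, sigma2 : kernels sigma_i(t, s), assumed to accept NumPy arrays in s.
    """
    rng = np.random.default_rng() if rng is None else rng
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    dB = rng.normal(0.0, np.sqrt(h), N)          # Delta B_n = B(t_{n+1}) - B(t_n)
    X = np.empty(N + 1)
    X[0] = y0

    for n in range(N):
        # sum_{i=0}^{n-1} sigma1(t_n, t_i) X_i dB_i            (explicit stochastic sum)
        s_expl = np.sum(sigma1(t[n], t[:n]) * X[:n] * dB[:n]) if n > 0 else 0.0
        # sum_{i=0}^{n-1} sigma1(t_{n+1}, t_{i+1}) X_{i+1} dB_{i+1}  (sum in the theta-term)
        s_impl = np.sum(sigma1(t[n + 1], t[1:n + 1]) * X[1:n + 1] * dB[1:n + 1]) if n > 0 else 0.0
        # sum_{i=0}^{n-1} int_{t_i}^{t_{i+1}} sigma2(t_n, s) X_i ds, rectangle rule (our choice)
        l_sum = h * np.sum(sigma2(t[n], t[:n]) * X[:n]) if n > 0 else 0.0

        rhs = X[n] + h * (1.0 - theta) * f(X[n], s_expl) + g(X[n], l_sum) * dB[n]
        # X_{n+1} = rhs + h*theta*f(X_{n+1}, s_impl); solved by fixed-point iteration
        # (assumption: the iteration converges for small enough h).
        x = X[n]
        for _ in range(fp_iters):
            x = rhs + h * theta * f(x, s_impl)
        X[n + 1] = x
    return t, X
```

For θ = 0 the inner fixed-point loop is redundant and the update reduces to the explicit Euler-Maruyama scheme (4.3).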

We now examine the boundedness of the second moments of the numerical solution (4.1) in the following result.

Theorem 4.1.

Assume that (H2) holds. Let $\{X_n\}$ be the numerical solution of the θ-EM method (4.1). Then there exists a positive constant $K_0$, which depends on $\|\sigma_1\|,\|\sigma_2\|,T,Y_0$ and $K'$, but not on $h$, such that

\[\mathbb{E}|X_n|^2\le K_0.\]
Proof.

For all $0\le t_{n+1}\le T$, we have

\begin{align*}
\mathbb{E}|X_{n+1}|^2&\le 3\mathbb{E}|X_0|^2+3\mathbb{E}\Big|\sum_{j=0}^{n}h\Big[\theta f\Big(X_{j+1},\sum_{i=0}^{j-1}\sigma_1(t_{j+1},t_{i+1})X_{i+1}\Delta B_{i+1}\Big)+(1-\theta)f\Big(X_j,\sum_{i=0}^{j-1}\sigma_1(t_j,t_i)X_i\Delta B_i\Big)\Big]\Big|^2\\
&\quad+3\mathbb{E}\Big|\sum_{j=0}^{n}g\Big(X_j,\sum_{i=0}^{j-1}\int_{t_i}^{t_{i+1}}\sigma_2(t_j,s)X_i\,ds\Big)\Delta B_j\Big|^2\\
(4.4)\qquad &\le 3\mathbb{E}|X_0|^2+3I_1+3I_2,
\end{align*}

where

\[I_1:=\mathbb{E}\Big|\sum_{j=0}^{n}h\Big[\theta f\Big(X_{j+1},\sum_{i=0}^{j-1}\sigma_1(t_{j+1},t_{i+1})X_{i+1}\Delta B_{i+1}\Big)+(1-\theta)f\Big(X_j,\sum_{i=0}^{j-1}\sigma_1(t_j,t_i)X_i\Delta B_i\Big)\Big]\Big|^2,\]

and

\[I_2:=\mathbb{E}\Big|\sum_{j=0}^{n}g\Big(X_j,\sum_{i=0}^{j-1}\int_{t_i}^{t_{i+1}}\sigma_2(t_j,s)X_i\,ds\Big)\Delta B_j\Big|^2.\]

By Cauchy's inequality, the Itô isometry and (H2), we get

\begin{align*}
(4.5)\qquad I_1&\le 2(n+1)h^2\sum_{j=0}^{n}\Big[\theta^2\mathbb{E}\Big|f\Big(X_{j+1},\sum_{i=0}^{j-1}\sigma_1(t_{j+1},t_{i+1})X_{i+1}\Delta B_{i+1}\Big)\Big|^2+(1-\theta)^2\mathbb{E}\Big|f\Big(X_j,\sum_{i=0}^{j-1}\sigma_1(t_j,t_i)X_i\Delta B_i\Big)\Big|^2\Big]\\
&\le 2ThK'\sum_{j=0}^{n}\Big[2+\mathbb{E}|X_{j+1}|^2+\mathbb{E}\Big|\sum_{i=0}^{j-1}\sigma_1(t_{j+1},t_{i+1})X_{i+1}\Delta B_{i+1}\Big|^2+\mathbb{E}|X_j|^2+\mathbb{E}\Big|\sum_{i=0}^{j-1}\sigma_1(t_j,t_i)X_i\Delta B_i\Big|^2\Big]\\
&\le 2ThK'\Big[2(n+1)+\sum_{i=1}^{n}\mathbb{E}|X_i|^2+T\|\sigma_1\|^2\sum_{i=1}^{n}\mathbb{E}|X_i|^2+\sum_{i=1}^{n}\mathbb{E}|X_i|^2+T\|\sigma_1\|^2\sum_{i=1}^{n}\mathbb{E}|X_i|^2\\
&\qquad\qquad+\mathbb{E}|X_0|^2+T\|\sigma_1\|^2\mathbb{E}|X_0|^2+\mathbb{E}|X_{n+1}|^2+T\|\sigma_1\|^2\mathbb{E}|X_{n+1}|^2\Big]\\
&\le 2ThK'\Big[2(n+1)+2C_1\sum_{i=1}^{n}\mathbb{E}|X_i|^2+C_1\big(\mathbb{E}|X_0|^2+\mathbb{E}|X_{n+1}|^2\big)\Big]\\
&\le 2T^2K'\big(2+C_1\mathbb{E}|X_0|^2\big)+4ThK'C_1\sum_{i=1}^{n}\mathbb{E}|X_i|^2+2T^2K'C_1\mathbb{E}|X_{n+1}|^2.
\end{align*}

Similarly to [4], using Cauchy's inequality, the Minkowski inequality and (H2), and writing for brevity $g_j:=g\big(X_j,\sum_{i=0}^{j-1}\int_{t_i}^{t_{i+1}}\sigma_2(t_j,s)X_i\,ds\big)$, we show that

\[I_2=\sum_{j=0}^{n}\mathbb{E}\big[g_j^2(\Delta B_j)^2\big]+2\sum_{0\le i_1<i_2\le n}\mathbb{E}\big[g_{i_1}\Delta B_{i_1}\,g_{i_2}\Delta B_{i_2}\big].\]

Since $X_{i_1}$ is $\mathcal{F}_{t_{i_1}}$-measurable, $\Delta B_{i_1}$ is $\mathcal{F}_{t_{i_1+1}}$-measurable, $X_{i_2}$ is $\mathcal{F}_{t_{i_2}}$-measurable and $i_1<i_2$, the product $g_{i_1}\Delta B_{i_1}g_{i_2}$ is $\mathcal{F}_{t_{i_2}}$-measurable and therefore independent of $\Delta B_{i_2}$; since $\Delta B_{i_2}\sim N(0,t_{i_2+1}-t_{i_2})$ has zero mean, we obtain

\[\mathbb{E}\Big[\sum_{0\le i_1<i_2\le n}g_{i_1}\Delta B_{i_1}g_{i_2}\Delta B_{i_2}\Big]=\sum_{0\le i_1<i_2\le n}\mathbb{E}\big[g_{i_1}\Delta B_{i_1}g_{i_2}\big]\,\mathbb{E}\big[\Delta B_{i_2}\big]=0.\]

Hence

\begin{align*}
I_2&=h\sum_{j=0}^{n}\mathbb{E}|g_j|^2\\
(4.6)\qquad &\le hK'\sum_{j=0}^{n}\Big[1+\mathbb{E}|X_j|^2+\mathbb{E}\Big|\sum_{i=0}^{j-1}\int_{t_i}^{t_{i+1}}\sigma_2(t_j,s)X_i\,ds\Big|^2\Big]\\
&\le hK'\Big[(n+1)+\sum_{i=0}^{n}\mathbb{E}|X_i|^2+T^2\|\sigma_2\|^2\sum_{i=0}^{n}\mathbb{E}|X_i|^2\Big]\\
&\le TK'+C_2TK'\mathbb{E}|X_0|^2+hC_2K'\sum_{i=1}^{n}\mathbb{E}|X_i|^2.
\end{align*}

Substituting (4.5) and (4.6) into (4.4), we have

\begin{align*}
\mathbb{E}|X_{n+1}|^2&\le 3\mathbb{E}|X_0|^2+3\Big(2T^2K'\big(2+C_1\mathbb{E}|X_0|^2\big)+4ThK'C_1\sum_{i=1}^{n}\mathbb{E}|X_i|^2+2T^2K'C_1\mathbb{E}|X_{n+1}|^2\Big)\\
&\quad+3\Big(TK'+C_2TK'\mathbb{E}|X_0|^2+hC_2K'\sum_{i=1}^{n}\mathbb{E}|X_i|^2\Big)\\
&\le 3TK'(1+4T)+3\big(1+TK'(2TC_1+C_2)\big)\mathbb{E}|X_0|^2+3hK'(4TC_1+C_2)\sum_{i=1}^{n}\mathbb{E}|X_i|^2+6T^2K'C_1\mathbb{E}|X_{n+1}|^2.
\end{align*}

Provided that $\tilde C:=1-6T^2K'C_1>0$, this yields

\[\mathbb{E}|X_{n+1}|^2\le\frac{3}{\tilde C}\Big[TK'(1+4T)+\big(1+TK'(2TC_1+C_2)\big)\mathbb{E}|X_0|^2\Big]+\frac{3hK'(4TC_1+C_2)}{\tilde C}\sum_{i=1}^{n}\mathbb{E}|X_i|^2.\]

By the discrete Gronwall inequality, we get

\[\mathbb{E}|X_{n+1}|^2\le K_0,\]

where

\[K_0:=\frac{3}{\tilde C}\Big[TK'(1+4T)+\big(1+TK'(2TC_1+C_2)\big)\mathbb{E}|X_0|^2\Big]\exp\Big(\frac{3TK'(4TC_1+C_2)}{\tilde C}\Big).\]
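For the reader's convenience, the discrete Gronwall inequality is used here in the following elementary form (our formulation, stated for completeness): if $a_{n+1}\le A+B\sum_{i=1}^{n}a_i$ for all $n\ge 0$, with $A,B\ge 0$ and $a_i\ge 0$, then

\[a_{n+1}\le A(1+B)^n\le Ae^{Bn}.\]

Applied with $B=3hK'(4TC_1+C_2)/\tilde C$ and $nh\le T$, this produces exactly the exponential factor appearing in $K_0$.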

As in [4, 3, 1, 2], the convergence order of the θ-EM method could be further enhanced by including more terms in the numerical approximation.

4.2. Strong convergence of the θ-Euler-Maruyama method

In order to obtain the convergence result for the θ-Euler-Maruyama method (4.1), we now introduce a time-continuous interpolation of the discrete numerical approximation.

For $u\in[t_n,t_{n+1})$ with $0\le n\le N-1$, write $u_n:=u_h:=t_n$ and $u_{n+1}:=t_{n+1}$ for the mesh points enclosing $u$, and define the piecewise constant interpolant $X_h(s):=X_n$ for $s\in[t_n,t_{n+1})$.

Let $t\in[t_n,t_{n+1})$ with $0\le n\le N-1$ and let $X(t)$ be the continuous-time extension of $X_n$ with $X(t_n)=X_n$; we obtain

\begin{align*}
(4.7)\qquad X(t)&=X(t_n)+\int_{t_n}^{t}\Big(\theta f\Big(X_h(u_{n+1}),\int_0^{u_h}\sigma_1(u_{n+1},s)X_h(s)\,dB(s)\Big)+(1-\theta)f\Big(X_h(u_n),\int_0^{u_h}\sigma_1(u_n,s)X_h(s)\,dB(s)\Big)\Big)du\\
&\quad+\int_{t_n}^{t}g\Big(X_h(u_n),\int_0^{u_h}\sigma_2(u_n,s)X_h(s)\,ds\Big)dB(u)\\
&=X(t_0)+\int_0^{t}\Big(\theta f\Big(X_h(u_{n+1}),\int_0^{u_h}\sigma_1(u_{n+1},s)X_h(s)\,dB(s)\Big)+(1-\theta)f\Big(X_h(u_n),\int_0^{u_h}\sigma_1(u_n,s)X_h(s)\,dB(s)\Big)\Big)du\\
&\quad+\int_0^{t}g\Big(X_h(u_n),\int_0^{u_h}\sigma_2(u_n,s)X_h(s)\,ds\Big)dB(u).
\end{align*}

The following lemma and theorem establish the convergence order of (4.1); the proofs proceed similarly to [3], which treats the case $\theta=0$.

Lemma 4.1.

Assume that (H2) holds. Let $\{X_n\}$ be the numerical solution of the θ-Euler-Maruyama method (4.1). Then there exists a positive constant $K_1$, which depends on $\|\sigma_1\|$, $\|\sigma_2\|$, $K'$ and $T$, but not on $h$, such that

\[\mathbb{E}|X(t)-X(t_n)|^2\le K_1h.\]
Proof.

It is easy to see that

\begin{align*}
\mathbb{E}|X(t)-X(t_n)|^2&\le 2\mathbb{E}\Big|\int_{t_n}^{t}\Big(\theta f\Big(X_h(u_{n+1}),\int_0^{u_n}\sigma_1(u_{n+1},s)X_h(s)\,dB(s)\Big)+(1-\theta)f\Big(X_h(u_n),\int_0^{u_n}\sigma_1(u_n,s)X_h(s)\,dB(s)\Big)\Big)du\Big|^2\\
&\quad+2\mathbb{E}\Big|\int_{t_n}^{t}g\Big(X_h(u_n),\int_0^{u_n}\sigma_2(u_n,s)X_h(s)\,ds\Big)dB(u)\Big|^2\\
&=:2J_1+2J_2.
\end{align*}

By $(x+y)^2\le 2x^2+2y^2$, Cauchy's inequality, the Itô isometry and (H2), we get

\begin{align*}
(4.8)\qquad J_1&\le 2h\int_{t_n}^{t}\Big(\theta^2\mathbb{E}\Big|f\Big(X_h(u_{n+1}),\int_0^{u_n}\sigma_1(u_{n+1},s)X_h(s)\,dB(s)\Big)\Big|^2+(1-\theta)^2\mathbb{E}\Big|f\Big(X_h(u_n),\int_0^{u_n}\sigma_1(u_n,s)X_h(s)\,dB(s)\Big)\Big|^2\Big)du\\
&\le 2h\int_{t_n}^{t}K'\Big[2+\mathbb{E}|X_{n+1}|^2+\mathbb{E}\Big|\int_0^{t_n}\sigma_1(u_{n+1},s)X_h(s)\,dB(s)\Big|^2+\mathbb{E}|X_n|^2+\mathbb{E}\Big|\int_0^{t_n}\sigma_1(u_n,s)X_h(s)\,dB(s)\Big|^2\Big]du\\
&\le 2h\int_{t_n}^{t}K'\Big[2+\mathbb{E}|X_{n+1}|^2+\mathbb{E}|X_n|^2+2\|\sigma_1\|^2\int_0^{t_n}\mathbb{E}|X_h(s)|^2ds\Big]du\\
&\le 2h^2K'\big(2+2(1+\|\sigma_1\|^2T)K_0\big)=4h^2K'(1+C_1K_0).
\end{align*}

By (H2), Cauchy's inequality and the Itô isometry, we obtain

\begin{align*}
(4.9)\qquad J_2&=\int_{t_n}^{t}\mathbb{E}\Big|g\Big(X_n,\int_0^{t_n}\sigma_2(u_n,s)X_h(s)\,ds\Big)\Big|^2du\\
&\le\int_{t_n}^{t}K'\Big[1+\mathbb{E}|X_n|^2+\mathbb{E}\Big|\int_0^{t_n}\sigma_2(u_n,s)X_h(s)\,ds\Big|^2\Big]du\\
&\le hK'\Big[1+\mathbb{E}|X_n|^2+T\|\sigma_2\|^2\int_0^{t_n}\mathbb{E}|X_h(s)|^2ds\Big]\\
&\le hK'\big[1+(1+T^2\|\sigma_2\|^2)K_0\big]\le hK'(1+C_2K_0).
\end{align*}

From (4.8) and (4.9), using $h\le T$, we get

\[\mathbb{E}|X(t)-X(t_n)|^2\le K_1h,\]

where

\[K_1:=2K'\big[(1+4T)+(4TC_1+C_2)K_0\big].\]
Theorem 4.2.

Suppose that (H1) holds and that $\sigma_i\in C^1(D)$, $i=1,2$, satisfy (H3). Let $X(t)$ and $Y(t)$ be the numerical solution (4.7) of the θ-Euler-Maruyama method and the analytic solution of (2.2), respectively. Then there exists a positive constant $K_2$, which depends on $\|\sigma_1\|,\|\sigma_2\|,K,K',K''$ and $T$, but not on $h$, such that

\[\mathbb{E}|X(t)-Y(t)|^2\le K_2h.\]
Proof.

By (H1), Cauchy's inequality and the Itô isometry, we have

\[(4.10)\qquad \mathbb{E}|X(t)-Y(t)|^2\le 2L_1+2L_2,\]

where

\begin{align*}
L_1&:=\mathbb{E}\Big|\int_0^{t}\Big[\theta\Big(f\Big(Y(u),\int_0^u\sigma_1(u,s)Y(s)\,dB(s)\Big)-f\Big(X_h(u_{n+1}),\int_0^{u_n}\sigma_1(u_{n+1},s)X_h(s)\,dB(s)\Big)\Big)\\
&\qquad\quad+(1-\theta)\Big(f\Big(Y(u),\int_0^u\sigma_1(u,s)Y(s)\,dB(s)\Big)-f\Big(X_h(u_n),\int_0^{u_n}\sigma_1(u_n,s)X_h(s)\,dB(s)\Big)\Big)\Big]du\Big|^2,
\end{align*}

and

\[L_2:=\mathbb{E}\Big|\int_0^{t}g\Big(Y(u),\int_0^u\sigma_2(u,s)Y(s)\,ds\Big)dB(u)-\int_0^{t}g\Big(X_h(u_n),\int_0^{u_n}\sigma_2(u_n,s)X_h(s)\,ds\Big)dB(u)\Big|^2.\]

By Cauchy's inequality, we obtain

\begin{align*}
L_1&\le T\int_0^{t}\mathbb{E}\Big|\theta\Big(f\Big(Y(u),\int_0^u\sigma_1(u,s)Y(s)\,dB(s)\Big)-f\Big(X_h(u_{n+1}),\int_0^{u_n}\sigma_1(u_{n+1},s)X_h(s)\,dB(s)\Big)\Big)\\
&\qquad\qquad+(1-\theta)\Big(f\Big(Y(u),\int_0^u\sigma_1(u,s)Y(s)\,dB(s)\Big)-f\Big(X_h(u_n),\int_0^{u_n}\sigma_1(u_n,s)X_h(s)\,dB(s)\Big)\Big)\Big|^2du\\
&\le 2T\Big(\theta^2\int_0^{t}\mathbb{E}\Big|f\Big(Y(u),\int_0^u\sigma_1(u,s)Y(s)\,dB(s)\Big)-f\Big(X_h(u_{n+1}),\int_0^{u_n}\sigma_1(u_{n+1},s)X_h(s)\,dB(s)\Big)\Big|^2du\\
&\qquad+(1-\theta)^2\int_0^{t}\mathbb{E}\Big|f\Big(Y(u),\int_0^u\sigma_1(u,s)Y(s)\,dB(s)\Big)-f\Big(X_h(u_n),\int_0^{u_n}\sigma_1(u_n,s)X_h(s)\,dB(s)\Big)\Big|^2du\Big).
\end{align*}

Next, using (H1) and $(x+y)^2\le 2x^2+2y^2$, one has

\begin{align*}
L_1&\le 2KT\int_0^{t}\Big[\mathbb{E}|Y(u)-X_h(u_{n+1})|^2+\mathbb{E}\Big|\int_0^u\sigma_1(u,s)Y(s)\,dB(s)-\int_0^{u_n}\sigma_1(u_{n+1},s)X_h(s)\,dB(s)\Big|^2\Big]du\\
&\quad+2KT\int_0^{t}\Big[\mathbb{E}|Y(u)-X_h(u_n)|^2+\mathbb{E}\Big|\int_0^u\sigma_1(u,s)Y(s)\,dB(s)-\int_0^{u_n}\sigma_1(u_n,s)X_h(s)\,dB(s)\Big|^2\Big]du\\
&\le 4KT\int_0^{t}\Big[\mathbb{E}|Y(u)-X(u)|^2+\mathbb{E}|X(u)-X_h(u_{n+1})|^2+\mathbb{E}\Big|\int_{u_n}^{u}\sigma_1(u_{n+1},s)X_h(s)\,dB(s)\Big|^2\\
&\qquad\qquad+\mathbb{E}\Big|\int_0^u\big(\sigma_1(u,s)Y(s)-\sigma_1(u_{n+1},s)X_h(s)\big)dB(s)\Big|^2\\
&\qquad\qquad+\mathbb{E}|Y(u)-X(u)|^2+\mathbb{E}|X(u)-X_h(u_n)|^2+\mathbb{E}\Big|\int_{u_n}^{u}\sigma_1(u_n,s)X_h(s)\,dB(s)\Big|^2\\
&\qquad\qquad+\mathbb{E}\Big|\int_0^u\big(\sigma_1(u,s)Y(s)-\sigma_1(u_n,s)X_h(s)\big)dB(s)\Big|^2\Big]du.
\end{align*}

By Hölder's inequality, the Itô isometry, Theorem 4.1 and Lemma 4.1, we have

\[\int_0^{t}\mathbb{E}|X(u)-X_h(u_n)|^2du\le TK_1h,\qquad \int_0^{t}\mathbb{E}|X(u)-X_h(u_{n+1})|^2du\le TK_1h,\]

and

\[\int_0^{t}\mathbb{E}\Big|\int_{u_n}^{u}\sigma_1(u_{n+1},s)X_h(s)\,dB(s)\Big|^2du\le\int_0^{t}\int_{u_n}^{u}|\sigma_1(u_{n+1},s)|^2\mathbb{E}|X_h(s)|^2ds\,du\le\int_0^{t}\|\sigma_1\|^2hK_0\,du\le\|\sigma_1\|^2hK_0T,\]

and, in the same way,

\[\int_0^{t}\mathbb{E}\Big|\int_{u_n}^{u}\sigma_1(u_n,s)X_h(s)\,dB(s)\Big|^2du\le\|\sigma_1\|^2hK_0T.\]

By (H3), we show that

\begin{align*}
\int_0^{t}\mathbb{E}\Big|\int_0^u\big(\sigma_1(u,s)Y(s)-\sigma_1(u_n,s)X_h(s)\big)dB(s)\Big|^2du
&\le\int_0^{t}\int_0^u\mathbb{E}\big|\sigma_1(u,s)(Y(s)-X_h(s))+(\sigma_1(u,s)-\sigma_1(u_n,s))X_h(s)\big|^2ds\,du\\
&\le 2\int_0^{t}\int_0^u|\sigma_1(u,s)|^2\mathbb{E}|Y(s)-X_h(s)|^2ds\,du\\
&\quad+2\int_0^{t}\int_0^u|\sigma_1(u,s)-\sigma_1(u_n,s)|^2\mathbb{E}|X_h(s)|^2ds\,du\\
&\le 4\int_0^{t}\int_0^u|\sigma_1(u,s)|^2\mathbb{E}|Y(s)-X(s)|^2ds\,du+4\int_0^{t}\int_0^u|\sigma_1(u,s)|^2\mathbb{E}|X(s)-X_h(s)|^2ds\,du\\
&\quad+2\int_0^{t}\int_0^u|\sigma_1(u,s)-\sigma_1(u_n,s)|^2\mathbb{E}|X_h(s)|^2ds\,du\\
&\le K''K_0T^2h^2+2\|\sigma_1\|^2K_1T^2h+4T\|\sigma_1\|^2\int_0^{t}\mathbb{E}|Y(s)-X(s)|^2ds,
\end{align*}

and the same bound holds with $\sigma_1(u_n,s)$ replaced by $\sigma_1(u_{n+1},s)$. Thus, using $h\le T$,

\[(4.11)\qquad L_1\le L_{11}hT+L_{12}\int_0^{t}\mathbb{E}|Y(s)-X(s)|^2ds,\]

where

\[L_{11}:=8KT\big[(K_0+2TK_1)\|\sigma_1\|^2+K_1+T^2K''K_0\big],\qquad L_{12}:=8KT\big[1+4T\|\sigma_1\|^2\big].\]

Using Theorem 4.1 and the Itô isometry, we get

\begin{align*}
L_2&=\mathbb{E}\Big|\int_0^{t}g\Big(Y(u),\int_0^u\sigma_2(u,s)Y(s)\,ds\Big)dB(u)-\int_0^{t}g\Big(X_h(u_n),\int_0^{u_n}\sigma_2(u_n,s)X_h(s)\,ds\Big)dB(u)\Big|^2\\
&\le\int_0^{t}\mathbb{E}\Big|g\Big(Y(u),\int_0^u\sigma_2(u,s)Y(s)\,ds\Big)-g\Big(X_h(u_n),\int_0^{u_n}\sigma_2(u_n,s)X_h(s)\,ds\Big)\Big|^2du\\
&\le K\int_0^{t}\Big[\mathbb{E}|Y(u)-X_h(u_n)|^2+\mathbb{E}\Big|\int_0^u\sigma_2(u,s)Y(s)\,ds-\int_0^{u_n}\sigma_2(u_n,s)X_h(s)\,ds\Big|^2\Big]du\\
&\le 2K\int_0^{t}\Big[\mathbb{E}|Y(u)-X(u)|^2+\mathbb{E}|X(u)-X_h(u_n)|^2+\mathbb{E}\Big|\int_{u_n}^{u}\sigma_2(u_n,s)X_h(s)\,ds\Big|^2\\
&\qquad\qquad+\mathbb{E}\Big|\int_0^u\big(\sigma_2(u,s)Y(s)-\sigma_2(u_n,s)X_h(s)\big)ds\Big|^2\Big]du.
\end{align*}

By Cauchy's inequality, Theorem 4.1 and Lemma 4.1,

\[\int_0^{t}\mathbb{E}\Big|\int_{u_n}^{u}\sigma_2(u_n,s)X_h(s)\,ds\Big|^2du\le\int_0^{t}\Big[\int_{u_n}^{u}|\sigma_2(u_n,s)|^2ds\int_{u_n}^{u}\mathbb{E}|X_h(s)|^2ds\Big]du\le\int_0^{t}h^2\|\sigma_2\|^2K_0\,du\le h^2T\|\sigma_2\|^2K_0,\]

and, by Cauchy's inequality, (H3), Theorem 4.1 and Lemma 4.1,

\begin{align*}
\int_0^{t}\mathbb{E}\Big|\int_0^u\big(\sigma_2(u,s)Y(s)-\sigma_2(u_n,s)X_h(s)\big)ds\Big|^2du
&\le T\int_0^{t}\int_0^u\mathbb{E}\big|\sigma_2(u,s)Y(s)-\sigma_2(u_n,s)X_h(s)\big|^2ds\,du\\
&\le 2T\int_0^{t}\int_0^u|\sigma_2(u,s)-\sigma_2(u_n,s)|^2\mathbb{E}|X_h(s)|^2ds\,du\\
&\quad+4T\int_0^{t}\int_0^u|\sigma_2(u,s)|^2\mathbb{E}|X(s)-X_h(s)|^2ds\,du\\
&\quad+4T\int_0^{t}\int_0^u|\sigma_2(u,s)|^2\mathbb{E}|Y(s)-X(s)|^2ds\,du\\
&\le K''K_0T^3h^2+2\|\sigma_2\|^2K_1T^3h+4T^2\|\sigma_2\|^2\int_0^{t}\mathbb{E}|Y(s)-X(s)|^2ds.
\end{align*}

Thus, using $h\le T$,

\[(4.12)\qquad L_2\le L_{21}hT+L_{22}\int_0^{t}\mathbb{E}|Y(s)-X(s)|^2ds,\]

where

\[L_{21}:=2K\big[K_1+T\|\sigma_2\|^2K_0+T^3K''K_0+2T^2\|\sigma_2\|^2K_1\big],\qquad L_{22}:=2K\big[1+4T^2\|\sigma_2\|^2\big].\]

Substituting (4.11) and (4.12) into (4.10), we have

\[\mathbb{E}|X(t)-Y(t)|^2\le 2(L_{11}+L_{21})hT+2(L_{12}+L_{22})\int_0^{t}\mathbb{E}|Y(s)-X(s)|^2ds,\]

where $L_{11},L_{12},L_{21}$ and $L_{22}$ are the constants defined above. By Gronwall's inequality, we have

\[\mathbb{E}|X(t)-Y(t)|^2\le K_2h,\]

where

\[K_2:=2T(L_{11}+L_{21})\exp\big(2(L_{12}+L_{22})T\big).\]

We now present numerical experiments that support the theoretical prediction of the preceding sections, namely that the θ-EM method for this class of SVIDEs is convergent of order 1/2.

5. Numerical experiments

In this section, we present two numerical examples to verify the theoretical results. We use discrete Brownian paths over $[0,1]$ with $\Delta t=2^{-10}$. Let $Y_h^i(T)$ denote the numerical solution of the θ-Euler-Maruyama method along the $i$-th sample path at $t=T$ with step size $h\in\{2^3\Delta t,2^4\Delta t,2^5\Delta t,2^6\Delta t\}$. We take the numerical solution $Y_{\Delta t}^i(T)$, computed with the fine step $\Delta t$, as an approximation of the analytic solution and compare it with the numerical approximations over $M=1000$ sample paths. The mean-square error is

\[\mathrm{Error}_h:=\Big(\frac{1}{M}\sum_{i=1}^{M}\big|Y_h^i(T)-Y_{\Delta t}^i(T)\big|^2\Big)^{1/2},\]

while the strong convergence order is estimated numerically by

\[\mathrm{Order}=\log_2\frac{\mathrm{Error}_h}{\mathrm{Error}_{h/2}}.\]
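A minimal Python sketch of these two formulas (helper functions of our own; the demo at the bottom uses synthetic placeholder arrays, not the paper's simulation data):

```python
import numpy as np

def mean_square_error(Y_h, Y_ref):
    """Error_h = sqrt( (1/M) * sum_i |Y_h^i(T) - Y_ref^i(T)|^2 ).

    Y_h, Y_ref : arrays of length M with the endpoint values of the coarse
    and reference (fine-step) approximations along the same Brownian paths.
    """
    Y_h, Y_ref = np.asarray(Y_h), np.asarray(Y_ref)
    return np.sqrt(np.mean((Y_h - Y_ref) ** 2))

def empirical_order(error_h, error_h_half):
    """Order = log2(Error_h / Error_{h/2})."""
    return np.log2(error_h / error_h_half)

if __name__ == "__main__":
    # Synthetic illustration only: random arrays stand in for endpoint values.
    rng = np.random.default_rng(0)
    ref = rng.normal(size=1000)
    err_h = mean_square_error(ref + 0.1 * rng.normal(size=1000), ref)
    err_h2 = mean_square_error(ref + 0.07 * rng.normal(size=1000), ref)
    print("Error_h     =", err_h)
    print("Error_{h/2} =", err_h2)
    print("Order       =", empirical_order(err_h, err_h2))
```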

Consider the following stochastic Volterra integro-differential equation:

\[(5.15)\qquad dY(t)=\Big(Y(t)+a\cos\Big(\int_0^t\sigma_1(t,s)Y(s)\,dB(s)\Big)\Big)dt+\Big(Y(t)+b\sin\Big(\int_0^t\sigma_2(t,s)Y(s)\,ds\Big)\Big)dB(t),\]

with initial data $Y(0)=1$ and functions $f(x,y)=x+a\cos(y)$, $g(x,y)=x+b\sin(y)$.

Now, we present the following examples:

Example 5.1.

In equation (5.15), we take $a=1$, $b=1$, $\sigma_1(t,s)=\sin(2t-s)$ and $\sigma_2(t,s)=t-s+1$.
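For concreteness, a possible Python setup of these coefficients, under our reading of the printed kernels (the source rendering is ambiguous, so sigma1 and sigma2 below are assumptions), which can be passed to the theta_em sketch of Section 4.1:

```python
import numpy as np

# Coefficients of (5.15) for Example 5.1.
a, b = 1.0, 1.0

def f(x, y):
    return x + a * np.cos(y)

def g(x, y):
    return x + b * np.sin(y)

def sigma1(t, s):
    # assumed reading of the kernel: sigma1(t, s) = sin(2t - s)
    return np.sin(2.0 * t - s)

def sigma2(t, s):
    # assumed reading of the kernel: sigma2(t, s) = t - s + 1
    return t - s + 1.0

# Example usage with the sketch from Section 4.1:
# t, X = theta_em(f, g, sigma1, sigma2, y0=1.0, T=1.0, N=2**10, theta=0.5)
```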

Table 1 presents a comparison between the θ-Euler-Maruyama method and the Euler-Maruyama method in terms of the mean-square errors and the strong convergence orders. The corresponding curves are displayed in Fig. 1(a) and Fig. 1(b).

Stepsize  | Euler-Maruyama     | θ=0.25             | θ=0.5              | θ=0.75
          | Error     Order    | Error     Order    | Error     Order    | Error     Order
2^3 Δt    | 0.36491            | 0.31615            | 0.24880            | 0.28555
2^4 Δt    | 0.51287   0.49104  | 0.43569   0.46267  | 0.39184   0.45652  | 0.35430   0.50998
2^5 Δt    | 0.72441   0.49822  | 0.62635   0.52368  | 0.54196   0.46794  | 0.49710   0.48856
2^6 Δt    | 1.08861   0.58761  | 0.94770   0.59744  | 0.81481   0.58826  | 0.73190   0.55811

Table 1. Mean-square errors and strong convergence orders of the Euler-Maruyama and θ-Euler-Maruyama methods with θ ∈ {0.25, 0.5, 0.75} for Example 5.1.
[Figure 1(a). Mean-square error of the Euler-Maruyama and θ-Euler-Maruyama methods with θ ∈ {0.25, 0.5, 0.75} for Example 5.1.]
[Figure 1(b). Strong convergence order of the Euler-Maruyama and θ-Euler-Maruyama methods with θ ∈ {0, 0.25, 0.5, 0.75} for Example 5.1.]

Fig. 1(a) and Fig. 1(b) show the mean-square error and strong convergence order curves of the Euler-Maruyama and θ-EM methods with θ = 0.25, 0.5, 0.75, respectively, based on the results presented in Table 1.

In Fig. 2, we present the solution curves for Example 5.1. The blue curve shows the approximation of the analytic solution obtained with the Euler-Maruyama method, the red curve shows the numerical solution of the Euler-Maruyama method, and the green curve shows the numerical solution of the θ-Euler-Maruyama method for the values θ = 0, 0.25, 0.5 and 0.75.

[Figure 2. Approximate solution and numerical solutions by the EM and θ-EM methods with h = 2^3 Δt for Example 5.1.]
Example 5.2.

In equation (5.15), we take $a=0.5$, $b=0.2$, $\sigma_1(t,s)=\sin(2s-ts)$ and $\sigma_2(t,s)=\cos(t-2s+1)$.

As in the previous example, we obtain the results for the second example using the same approach. Table 2, corresponding to Example 5.2, presents the mean-square errors and the strong convergence orders of the Euler-Maruyama and θ-Euler-Maruyama methods, and reinforces the results obtained in Example 5.1.

Stepsize  | Euler-Maruyama     | θ=0.25             | θ=0.5              | θ=0.75
          | Error     Order    | Error     Order    | Error     Order    | Error     Order
2^3 Δt    | 0.28632            | 0.23911            | 0.22111            | 0.19686
2^4 Δt    | 0.40390   0.49637  | 0.34838   0.54296  | 0.30660   0.47154  | 0.28122   0.51449
2^5 Δt    | 0.57300   0.50453  | 0.48685   0.48279  | 0.43328   0.49895  | 0.39785   0.50051
2^6 Δt    | 0.87765   0.61508  | 0.71764   0.55977  | 0.66639   0.62105  | 0.61200   0.62130

Table 2. Mean-square errors and strong convergence orders of the Euler-Maruyama and θ-Euler-Maruyama methods with θ ∈ {0.25, 0.5, 0.75} for Example 5.2.
[Figure 3(a). Mean-square error of the Euler-Maruyama and θ-Euler-Maruyama methods with θ ∈ {0, 0.25, 0.5, 0.75} for Example 5.2.]
[Figure 3(b). Strong convergence order of the Euler-Maruyama and θ-Euler-Maruyama methods with θ ∈ {0, 0.25, 0.5, 0.75} for Example 5.2.]

In Fig. 4, the solution curves are depicted. The blue curve represents the approximation of the analytical solution using the Euler-Maruyama method, the red curve represents the numerical solution of the Euler-Maruyama method, and the green curve represents the numerical solution of the θ-Euler-Maruyama method.

[Figure 4. Approximate solution and numerical solutions by the EM and θ-EM methods with h = 2^3 Δt for Example 5.2.]

The strong convergence results of the θ-Euler-Maruyama method for the stochastic Volterra integro-differential equations of Example 5.1 and Example 5.2 are shown in Table 1 and Table 2. From these tables, we can see that the θ-Euler-Maruyama method for this class of SVIDEs is convergent of order 1/2.

6. Conclusion

In this paper, we examined the numerical solution of a class of stochastic Volterra integro-differential equations. We investigated the existence, uniqueness, and Hölder continuity of the theoretical solution. Additionally, we considered the Euler-Maruyama (EM) and θ-EM methods for solving the SVIDEs, analyzing their mean-square errors. Moreover, we established that the EM and θ-EM approximate solutions are strongly convergent with order 1/2. Numerical examples have been provided to illustrate the effectiveness of the theoretical results obtained in this paper. In particular, the experiments indicate that the θ-EM method is more accurate than the EM method for the numerical approximation of the solution of stochastic Volterra integro-differential equations for the values θ = 0.25, 0.5, 0.75.

References

  • [1] M. Bayram, T. Partal and G. Orucova Buyukoz (2018), Numerical methods for simulation of stochastic differential equations, Advances in Difference Equations, 2018, pp. 1-10.
  • [2] M. Bayram, T. Partal and G. Orucova Buyukoz (2018), Numerical methods for simulation of stochastic differential equations, Advances in Difference Equations, 2018, pp. 1-10.
  • [3] D. Conte, R. D'Ambrosio and B. Paternoster (2018), On the stability of ϑ-methods for stochastic Volterra integral equations, Discrete and Continuous Dynamical Systems - Series B, 23 (7), pp. 2695-2708.
  • [4] C. Deng and W. Liu (2020), Semi-implicit Euler-Maruyama method for non-linear time-changed stochastic differential equations, BIT Numerical Mathematics, 60, pp. 1133-1151.
  • [5] S. Federico, G. Ferrari and L. Regis (2020), Applications of Stochastic Optimal Control to Economics and Finance, MDPI, Basel.
  • [6] D. J. Higham (2000), Mean-square and asymptotic stability of the stochastic theta method, SIAM Journal on Numerical Analysis, 38 (3), pp. 753-769.
  • [7] G. Lan, M. Zhao and S. Qi (2022), Exponential stability of θ-EM method for nonlinear stochastic Volterra integro-differential equations, Applied Numerical Mathematics, 172, pp. 279-291.
  • [8] M. Li, C. Huang and Y. Hu (2022), Numerical methods for stochastic Volterra integral equations with weakly singular kernels, IMA Journal of Numerical Analysis, 42 (3), pp. 2656-2683.
  • [9] X. Mao and L. Szpruch (2013), Strong convergence and stability of implicit numerical methods for stochastic differential equations with non-globally Lipschitz continuous coefficients, Journal of Computational and Applied Mathematics, 238, pp. 14-28.
  • [10] X. Mao (2007), Stochastic Differential Equations and Applications, Elsevier.
  • [11] K. Nouri, H. Ranjbar and L. Torkzadeh (2019), Modified stochastic theta methods by ODEs solvers for stochastic differential equations, Communications in Nonlinear Science and Numerical Simulation, 68, pp. 336-346.
  • [12] T. Sauer (2013), Computational solution of stochastic differential equations, Wiley Interdisciplinary Reviews: Computational Statistics, 5 (5), pp. 362-371.
  • [13] W. Wang, L. Yan, S. Gao and J. Hu (2021), The truncated theta-EM method for nonlinear and nonautonomous hybrid stochastic differential delay equations with Poisson jumps, Discrete Dynamics in Nature and Society, 2021, pp. 1-17.
  • [14] X. Wang, J. Wu and B. Dong (2020), Mean-square convergence rates of stochastic theta methods for SDEs under a coupled monotonicity condition, BIT Numerical Mathematics, 60 (3), pp. 759-790.
  • [15] Z. Wang, Y. Gao, C. Fang, L. Liu, H. Zhou and H. Zhang (2020), Optimal control design for connected cruise control with stochastic communication delays, IEEE Transactions on Vehicular Technology, 69 (12), pp. 15357-15369.
  • [16] H. Yang and F. Jiang (2014), Stochastic θ-methods for a class of jump-diffusion stochastic pantograph equations with random magnitude, The Scientific World Journal, 2014.
  • [17] W. Zhang, H. Liang and J. Gao (2020), Theoretical and numerical analysis of the Euler-Maruyama method for generalized stochastic Volterra integro-differential equations, Journal of Computational and Applied Mathematics, 365, 112364.
  • [18] X. Zong, F. Wu and G. Xu (2018), Convergence and stability of two classes of theta-Milstein schemes for stochastic differential equations, Journal of Computational and Applied Mathematics, 336, pp. 8-29.