A numerical method for the solution of an autonomous initial value problem

Abstract

Using a known interpolation formula we introduce a class of numerical methods for approximating the solutions of scalar initial value problems for first order differential equations, which can be identified as explicit Runge-Kutta methods. We determine bounds for the local truncation error and we also compare the convergence order and the stability region with those of explicit Runge-Kutta methods whose convergence order equals the number of stages (i.e. with 2, 3 and 4 stages). The convergence order is only two, but our methods have a larger absolute stability region than the above-mentioned methods. In the last section a numerical example is provided, and the numerical approximation obtained is compared with the corresponding exact solution.

Authors

Flavius Patrulescu
(Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy)

Keywords

initial value problem; stability region; convergence order; local truncation error

Cite this paper as

F. Pătrulescu, A numerical method for the solution of an autonomous initial value problem, Carpathian J. Math. vol. 28 (2012), pp. 289-296


About this paper

Journal

Carpathian Journal of Mathematics

Publisher Name

North University of Baia Mare, Department of Mathematics and Computer Science, Baia Mare

Print ISSN

1584-2851

Online ISSN

1843-4401

MR

3027258

ZBL

1289.65153

Google Scholar

[1] M. Crouzeix, A.L. Mignot, Analyse numérique des équations différentielles, Masson, Paris (1989).
[2] J.D. Lambert, Numerical Methods for Ordinary Differential Systems. The Initial Value Problem, John Wiley & Sons, Chichester (1990).
[3] I. Păvăloiu, On an approximation formula, Rev. Anal. Numér. Théor. Approx., 26, no. 1-2 (1997), 179-183.
[4] A. Ralston, Runge-Kutta Methods with Minimum Error Bounds, Math. Comp., 16, no. 80 (1962), 431-437.
[5] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Inc., Englewood Cliffs, N.J. (1964).

Paper (preprint) in HTML form


A numerical method for the solution of an autonomous initial value problem

Flavius Pătrulescu$^{1}$

$^{1}$ Tiberiu Popoviciu Institute of Numerical Analysis, P.O. Box 68-1, 400110 Cluj-Napoca, Romania

Abstract

Using a known interpolation formula we introduce a class of numerical methods for approximating the solutions of scalar initial value problems for first order differential equations, which can be identified as explicit Runge-Kutta methods. We determine bounds for the local truncation error and we also compare the convergence order and the stability region with those of explicit Runge-Kutta methods whose convergence order equals the number of stages (i.e. with 2, 3 and 4 stages). The convergence order is only two, but our methods have a larger absolute stability region than the above-mentioned methods. In the last section a numerical example is provided, and the numerical approximation obtained is compared with the corresponding exact solution.

2010 Mathematics Subject Classification: 65L05, 65L06.
Keywords: initial value problem, stability region, convergence order, local truncation error

1 Introduction

Consider a scalar initial value problem (IVP):
$$\left\{\begin{array}{l} y' = f(x, y), \quad x \in I, \\ y(x_0) = y_0, \end{array}\right. \tag{1.1}$$
where $I \subseteq \mathbb{R}$, $y_0 \in \mathbb{R}$, $f : I \times \mathbb{R} \rightarrow \mathbb{R}$ and $x_0 \in I$.
We assume that the function $f$ satisfies all requirements necessary to ensure the existence of a unique solution $y$ on the finite interval $I = [x_0, x_0 + T]$, $0 < T < \infty$, see [1] for details.
In this paper we give a method to approximate the solution of the above initial value problem using an approximation formula for functions given in [3] and [5].
One denotes by $I_T$ the closed interval determined by the two distinct points $x_0$, $x_0 + T \in \mathbb{R}$, where $T \in \mathbb{R}$. For a three times differentiable function $h : I_T \rightarrow \mathbb{R}$ consider the function $g : I_T \rightarrow \mathbb{R}$ given by:
$$g(x) = h(x_0) + (x - x_0)\, h'\!\left(x_0 + \frac{1}{2}(x - x_0)\right), \quad x \in I_T. \tag{1.2}$$
In [3] it is shown that the function $g$ satisfies:
$$h^{(i)}(x_0) = g^{(i)}(x_0), \quad i = \overline{0, 2}, \quad \text{and} \quad |h(x) - g(x)| \leq \frac{7}{24} M_3 |x - x_0|^3, \quad x \in I_T, \tag{1.3}$$
where $M_3 = \sup_{x \in I_T} |h^{(3)}(x)|$.
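To make the error bound (1.3) concrete, here is a minimal numerical check, assuming $h(x) = \sin x$ on $[0, 1]$ (so that $M_3 = 1$); the function and interval are illustrative choices, not taken from the paper.

```python
import numpy as np

def g(h, dh, x0, x):
    """Approximation (1.2): g(x) = h(x0) + (x - x0) * h'(x0 + (x - x0)/2)."""
    return h(x0) + (x - x0) * dh(x0 + 0.5 * (x - x0))

# Example with h = sin, whose third derivative is bounded by M3 = 1 on [0, 1].
x0 = 0.0
xs = np.linspace(x0, x0 + 1.0, 101)
err = np.abs(np.sin(xs) - g(np.sin, np.cos, x0, xs))
bound = 7.0 / 24.0 * np.abs(xs - x0) ** 3   # right-hand side of (1.3) with M3 = 1
assert np.all(err <= bound + 1e-15)
```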
In the next sections, for simplicity, we consider only the autonomous case, i.e. $f = f(y)$; the general case can be treated in the same manner.
The paper is structured as follows. In Section 2 we describe the numerical method together with its versions corresponding to the particular cases of the parameter. In Section 3 we study the convergence of the method and in Section 4 we provide its stability analysis. Two particular cases are investigated in Section 5 and, finally, a numerical example is presented in Section 6.

2 The numerical method

We suppose that the exact solution $y$ of the initial value problem (1.1) is three times differentiable (conditions for the regularity of the exact solution of an initial value problem can be found in [1]). Using the results described above we deduce that there exists an approximation $\tilde{y}$ given by
$$\tilde{y}(x) = y(x_0) + (x - x_0)\, y'\!\left(x_0 + \frac{1}{2}(x - x_0)\right) = y(x_0) + (x - x_0)\, f\!\left(y\!\left(x_0 + \frac{1}{2}(x - x_0)\right)\right), \tag{2.1}$$
for all $x \in [x_0, x_0 + T]$. From (1.3) this approximation verifies
$$\tilde{y}^{(i)}(x_0) = y^{(i)}(x_0), \quad i = \overline{0, 2}, \quad \text{and} \quad |\tilde{y}(x) - y(x)| < \frac{7}{24} M_3 |x - x_0|^3, \quad x \in I,$$
where $M_3 = \sup_{x \in I} |y^{(3)}(x)|$.
Repeating (1.2), the unknown quantity $y\left(x_0 + \frac{1}{2}(x - x_0)\right)$ in (2.1) can be approximated in the same manner and we obtain
$$y\!\left(x_0 + \frac{1}{2}(x - x_0)\right) = y(x_0) + \frac{1}{2}(x - x_0)\, y'\!\left(x_0 + \frac{1}{2^2}(x - x_0)\right) = y(x_0) + \frac{1}{2}(x - x_0)\, f\!\left(y\!\left(x_0 + \frac{1}{2^2}(x - x_0)\right)\right).$$
Continuing this procedure for $y\left(x_0 + \frac{1}{2^2}(x - x_0)\right)$ and for the subsequent unknown values of $y$, after $p$ steps we obtain:
$$\tilde{y}(x) = y(x_0) + (x - x_0) f\Bigl(y(x_0) + \frac{1}{2}(x - x_0) f\bigl(y(x_0) + \frac{1}{2^2}(x - x_0) f(y(x_0) + \ldots + \frac{1}{2^{p-1}}(x - x_0) f(y(x_0 + \frac{1}{2^p}(x - x_0))) \ldots)\bigr)\Bigr). \tag{2.2}$$
We can write (2.2) in a more suitable form using the recurrence:
$$\begin{aligned} u_0(x) &= y\!\left(x_0 + \frac{1}{2^p}(x - x_0)\right), \\ u_1(x) &= y(x_0) + \frac{1}{2^{p-1}}(x - x_0) f(u_0(x)), \\ &\;\;\vdots \\ u_p(x) &= y(x_0) + (x - x_0) f(u_{p-1}(x)), \end{aligned} \tag{2.3}$$
and we can define $\tilde{y}(x) = u_p(x)$, $x \in [x_0, x_0 + T]$.
Taking into account that $\frac{1}{2^p}(x - x_0) \rightarrow 0$ as $p \rightarrow \infty$, and since $y$ is a continuous function, we can choose $p = p_0$ such that $y(x_0)$ approximates $y\left(x_0 + \frac{1}{2^{p_0}}(x - x_0)\right)$ with any given accuracy.
For $p = p_0$ we obtain
$$\tilde{u}_0(x) = y(x_0), \qquad \tilde{u}_i(x) = y(x_0) + \frac{1}{2^{p_0 - i}}(x - x_0) f(\tilde{u}_{i-1}(x)), \quad i = 1, \ldots, p_0, \tag{2.4}$$
and we use this recurrence relation to construct a numerical method for a scalar initial value problem. The interval $[x_0, x_0 + T]$ is partitioned by the point set
$$x_0 < x_1 < \ldots < x_N = x_0 + T. \tag{2.5}$$
Replacing $x$ by $x_1$ in (2.4) we obtain an approximation $\tilde{u}_{p_0}(x_1)$ of the exact value $y(x_1)$. We denote this approximation by $y_1$ and apply the algorithm (2.4) at the point $x_2$, but with the previously computed value $y_1$ in place of $y(x_0) = y_0$. Repeating this procedure we obtain, for each $x_s$, $s = \overline{1, N}$, an algorithm that can be described in the following way:
$$y_{s+1} = u_{p_0}^{s}, \quad s = \overline{1, N-1}, \tag{2.6}$$
where $u_{p_0}^{s}$ is obtained by the following method
$$u_0^{s} = y_s, \qquad u_i^{s} = y_s + \frac{1}{2^{p_0 - i}}(x_{s+1} - x_s) f(u_{i-1}^{s}), \quad i = \overline{1, p_0}, \tag{2.7}$$
and $y_s$ represents the exact value or an approximation of $y$ at $x = x_s$.
Thus, we obtain a numerical method given by
$$y_{s+1} = y_s + h_s f\Bigl(y_s + \frac{1}{2} h_s f\bigl(y_s + \frac{1}{2^2} h_s f(y_s + \ldots + \frac{1}{2^{p_0-1}} h_s f(y_s)) \ldots\bigr)\Bigr), \tag{2.8}$$
where $h_s = x_{s+1} - x_s$, $s = 0, \ldots, N-1$, represents the step length.
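The recurrence (2.7) translates directly into code. The following is a minimal Python sketch of the method (2.8); the helper names (`step_nested`, `solve`) are our own, and uniform steps are assumed for simplicity.

```python
import numpy as np

def step_nested(f, y_s, h_s, p0):
    """One step of method (2.8): p0 nested evaluations of f with
    step fractions h_s/2^(p0-1), ..., h_s/2, h_s (recurrence (2.7))."""
    u = y_s
    for i in range(1, p0 + 1):
        u = y_s + h_s / 2 ** (p0 - i) * f(u)
    return u

def solve(f, y0, x0, T, N, p0):
    """Integrate y' = f(y) on [x0, x0 + T] with N uniform steps."""
    h = T / N
    ys = [y0]
    for _ in range(N):
        ys.append(step_nested(f, ys[-1], h, p0))
    return np.array(ys)
```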
We have the following equivalence result.
Theorem 2.1 The method (2.8) is equivalent to a $p_0$-stage explicit Runge-Kutta method with the Butcher array given by:
$$\begin{array}{c|cccccc}
0 & & & & & & \\
\frac{1}{2^{p_0-1}} & \frac{1}{2^{p_0-1}} & & & & & \\
\frac{1}{2^{p_0-2}} & 0 & \frac{1}{2^{p_0-2}} & & & & \\
\vdots & \vdots & & \ddots & & & \\
\frac{1}{2^2} & 0 & 0 & \cdots & \frac{1}{2^2} & & \\
\frac{1}{2} & 0 & 0 & \cdots & 0 & \frac{1}{2} & \\
\hline
 & 0 & 0 & \cdots & 0 & 0 & 1
\end{array}$$
Proof. Following [2], for a $p_0$-stage explicit Runge-Kutta method the Butcher array can be represented as follows:
$$\begin{array}{c|c}
c & A \\
\hline
 & b^{T}
\end{array}$$
Here $A = (a_{ij})$, $i, j = \overline{1, p_0}$, is a strictly lower triangular matrix (i.e. $a_{ij} = 0$ for $j \geq i$), $b^{T} = [b_1, \ldots, b_{p_0}]$ and $c^{T} = [c_1, \ldots, c_{p_0}]$, $0 \leq c_i < 1$. The values $k_i$, $i = \overline{1, p_0}$, at the intermediate points $x_s + c_i h_s$ between $x_s$ and $x_{s+1}$, $s = \overline{0, N-1}$, are defined as follows:
$$k_1 = f(x_s, y_s), \qquad k_i = f\Bigl(x_s + c_i h_s,\; y_s + h_s \sum_{j=1}^{i-1} a_{ij} k_j\Bigr), \quad i = \overline{2, p_0}.$$
The approximation at the new point $x_{s+1} = x_s + h_s$ is given by
$$y_{s+1} = y_s + h_s \sum_{i=1}^{p_0} b_i k_i.$$
In the autonomous case, for the above Butcher array we obtain
$$k_1 = f(y_s), \qquad k_i = f\Bigl(y_s + h_s \frac{1}{2^{p_0 - i + 1}} k_{i-1}\Bigr), \quad i = \overline{2, p_0},$$
and, therefore,
$$y_{s+1} = y_s + h_s k_{p_0}. \tag{2.9}$$
It is easy to see that (2.9) provides a rule of the form (2.8), which concludes the proof.
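As a sanity check of Theorem 2.1, one can evaluate the stage values $k_i$ from the Butcher array directly and compare the result with the nested form (2.8). A sketch, reusing `step_nested` from the Section 2 sketch, with $f(y) = \cos^2 y$ as an illustrative right-hand side:

```python
import numpy as np

def rk_step(f, y_s, h_s, p0):
    """One explicit RK step with the tableau from Theorem 2.1
    (autonomous case): a_{i,i-1} = 1/2^(p0-i+1), b = (0, ..., 0, 1)."""
    k = np.empty(p0)
    k[0] = f(y_s)
    for i in range(2, p0 + 1):
        k[i - 1] = f(y_s + h_s * k[i - 2] / 2 ** (p0 - i + 1))
    return y_s + h_s * k[-1]        # y_{s+1} = y_s + h_s * k_{p0}, eq. (2.9)

f = lambda y: np.cos(y) ** 2
assert np.isclose(rk_step(f, 0.3, 0.1, 4), step_nested(f, 0.3, 0.1, 4))
```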
Next, we present some particular cases, obtained for different values of the parameter $p_0$. For $p_0 = 1$ we obtain the forward Euler method, given by
$$y_{s+1} = y_s + h_s f(y_s), \quad s = \overline{1, N-1}.$$
For $p_0 = 2$ we obtain the midpoint rule, given by
$$y_{s+1} = y_s + h_s f\left(y_s + \frac{1}{2} h_s f(y_s)\right), \quad s = \overline{1, N-1}.$$
For $p_0 = 3$ we obtain the method
$$y_{s+1} = y_s + h_s f\left(y_s + \frac{1}{2} h_s f\left(y_s + \frac{1}{2^2} h_s f(y_s)\right)\right), \quad s = \overline{1, N-1}, \tag{2.10}$$
with the Butcher array
$$\begin{array}{c|ccc}
0 & & & \\
\frac{1}{2^2} & \frac{1}{2^2} & & \\
\frac{1}{2} & 0 & \frac{1}{2} & \\
\hline
 & 0 & 0 & 1
\end{array}$$
For $p_0 = 4$ we obtain the method
$$y_{s+1} = y_s + h_s f\left(y_s + \frac{1}{2} h_s f\left(y_s + \frac{1}{2^2} h_s f\left(y_s + \frac{1}{2^3} h_s f(y_s)\right)\right)\right), \quad s = \overline{1, N-1}, \tag{2.11}$$
with the Butcher array
$$\begin{array}{c|cccc}
0 & & & & \\
\frac{1}{2^3} & \frac{1}{2^3} & & & \\
\frac{1}{2^2} & 0 & \frac{1}{2^2} & & \\
\frac{1}{2} & 0 & 0 & \frac{1}{2} & \\
\hline
 & 0 & 0 & 0 & 1
\end{array}$$
In the next sections, for the methods (2.10) and (2.11), we study the local truncation error, stability, consistency and convergence order, and we compare them with other known numerical methods.

3 Local truncation error

In this section we study the local truncation error and the convergence order of the method (2.8) when $p_0 \geq 3$, and we also determine a bound for the coefficient of the principal local truncation error.
As in [4], we suppose that
$$\|f\| < M \quad \text{and} \quad \left\|f^{(j)}\right\| < \frac{L^{j}}{M^{j-1}} \ \text{ on } [x_0, x_0 + T], \tag{3.1}$$
where $\|f\| = \sup\{|f(t)| : t \in I\}$ and $M, L$ are positive real numbers.
The convergence order of the method is provided in the following result.
Theorem 3.1 The method (2.8) has convergence order 2 and the coefficient of the principal local truncation error $C_3$ satisfies the bound
$$\left\|C_3\right\| \leq \frac{1}{12} M L^{2}. \tag{3.2}$$
Proof. In order to obtain the local truncation error of the method (2.8), following [2] we consider the operator
$$\mathcal{L}[z(x), h] = z(x+h) - z(x) - h f\Bigl(z(x) + \frac{1}{2} h f\bigl(z(x) + \ldots + \frac{1}{2^{p_0-1}} h f(z(x)) \ldots\bigr)\Bigr), \tag{3.3}$$
where $z$ is an arbitrary function defined on $[x_0, x_0 + T]$, at least three times differentiable, with $z'(x) = f(z(x))$, $x \in [x_0, x_0 + T]$. For $p_0 \geq 3$, using the Taylor series with respect to $x$ we obtain
$$\mathcal{L}[z(x), h] = \frac{1}{24} h^{3}\left[f(z(x))\left(f'(z(x))\right)^{2} + f^{2}(z(x)) f''(z(x))\right] + O(h^{4}).$$
Using the definition of the convergence order given in [2] we deduce that the method has second-order accuracy. Also, substituting $z$ by the exact solution $y$, $x$ by $x_s$, and making the localizing assumption $y_i = y(x_i)$, $i = \overline{1, s}$, the local truncation error of the method (see [2]) can be written as
$$T_{s+1} = \frac{1}{24} h^{3}\left[f(y(x_s))\left(f'(y(x_s))\right)^{2} + f^{2}(y(x_s)) f''(y(x_s))\right] + O(h^{4}), \tag{3.4}$$
and the coefficient of principal local truncation error is given by
$$C_3 = \frac{1}{24}\left[f(y(x_s))\left(f'(y(x_s))\right)^{2} + \left(f(y(x_s))\right)^{2} f''(y(x_s))\right]. \tag{3.5}$$
Then, using (3.1), we have for $C_3$ the bound
$$\left\|C_3\right\| = \frac{1}{24}\left\|f'' f^{2} + (f')^{2} f\right\| \leq \frac{1}{24}\left[\frac{L^{2}}{M} M^{2} + L^{2} M\right] = \frac{1}{12} M L^{2},$$
which concludes the proof.
Note that the method defined above is zero-stable because it satisfies the root condition. Also, since the convergence order is 2, it satisfies the consistency condition. It follows that our method is convergent, see [2] for details.
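The second-order convergence can also be observed empirically by halving the step size and inspecting error ratios. A sketch, assuming the `solve` helper from the Section 2 sketch and the illustrative linear test problem $y' = -y$, $y(0) = 1$, with exact solution $e^{-x}$:

```python
import numpy as np

f = lambda y: -y
errs = []
for N in (100, 200, 400):
    ys = solve(f, 1.0, 0.0, 1.0, N, p0=3)
    xs = np.linspace(0.0, 1.0, N + 1)
    errs.append(np.max(np.abs(ys - np.exp(-xs))))
# Observed order: log2 of the error ratio for successive halvings.
orders = [np.log2(errs[i] / errs[i + 1]) for i in range(2)]
print(orders)   # both values should be close to 2
```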

4 Stability analysis

In this section we define the stability function and the absolute stability region for the method (2.8) when $p_0 \geq 3$. Our main result is the following.
Theorem 4.1 The method (2.8) has the stability function given by:
$$R(q) = 1 + \sum_{k=1}^{p_0} \frac{1}{2^{\frac{k(k-1)}{2}}}\, q^{k}, \quad q \in \mathbb{C}. \tag{4.1}$$
Proof. We apply (2.8) to the scalar test equation (see [2])
$$y' = \lambda y, \quad \lambda \in \mathbb{C}, \quad \operatorname{Re} \lambda < 0,$$
and we obtain the difference equation:
$$y_{s+1} = \left[1 + h\lambda + \frac{1}{2}(h\lambda)^{2} + \frac{1}{2^{3}}(h\lambda)^{3} + \ldots + \frac{1}{2^{\frac{p_0(p_0-1)}{2}}}(h\lambda)^{p_0}\right] y_s.$$
Denoting $q = h\lambda$, the stability function is obtained as $R(q) = 1 + \sum_{k=1}^{p_0} \frac{1}{2^{\frac{k(k-1)}{2}}} q^{k}$.
The absolute stability region (see [2]) is given by
$$\mathcal{R} = \{q \in \mathbb{C} : |R(q)| < 1\} = \left\{q \in \mathbb{C} : \left|1 + \sum_{k=1}^{p_0} \frac{1}{2^{\frac{k(k-1)}{2}}} q^{k}\right| < 1\right\}.$$
We study this region in the next section for the methods (2.10) and (2.11).
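The scanning technique used there amounts to evaluating $|R(q)|$ on a grid in the complex plane and drawing the level set $|R(q)| = 1$. A Python sketch (the grid bounds and resolution are illustrative choices, not from the paper):

```python
import numpy as np
import matplotlib.pyplot as plt

def stability_R(q, p0):
    """Stability function (4.1): R(q) = 1 + sum_k q^k / 2^(k(k-1)/2)."""
    return 1 + sum(q ** k / 2 ** (k * (k - 1) / 2) for k in range(1, p0 + 1))

# Scan a rectangle of the complex plane and draw the boundary |R(q)| = 1.
x, y = np.meshgrid(np.linspace(-4, 1, 500), np.linspace(-4, 4, 500))
q = x + 1j * y
for p0, style in ((3, "solid"), (4, "dashed")):
    plt.contour(x, y, np.abs(stability_R(q, p0)), levels=[1.0],
                linestyles=style)
plt.xlabel("Re q"); plt.ylabel("Im q"); plt.show()
```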

5 Method (2.8) when $p_0 = 3$ and $p_0 = 4$

We restrict ourselves to the methods (2.10) and (2.11), which are studied below. For $p_0 \geq 5$ the methods obtained from (2.8) have a much more complicated form, with a high computational cost, while the gain in accuracy is modest, because from (3.4) we deduce that the convergence order remains only 2.
The method (2.10), obtained for $p_0 = 3$ and defined by
$$y_{s+1} = y_s + h_s f\left(y_s + \frac{1}{2} h_s f\left(y_s + \frac{1}{2^2} h_s f(y_s)\right)\right), \tag{5.1}$$
requires three evaluations of the function f f fff for each mesh point.
From (3.4) we deduce that the convergence order of this method is 2, unlike the 3-stage explicit Runge-Kutta methods of order 3, which also require three function evaluations.
However, this method has an advantage concerning the absolute stability region. From (4.1) we deduce that the absolute stability function for (2.10) is defined by
$$R(q) = 1 + q + \frac{q^{2}}{2} + \frac{q^{3}}{8}. \tag{5.2}$$
Using the scanning technique (see [2]), the absolute stability region is plotted with Matlab in Figure 1(a) with a continuous line. We know that the absolute stability function for 3-stage explicit Runge-Kutta methods of order 3 (see [2]) is defined by
$$R(q) = 1 + q + \frac{q^{2}}{2} + \frac{q^{3}}{6}, \tag{5.3}$$
and its absolute stability region is plotted, using the same technique, in Figure 1(a) with a dashed line. We observe that the absolute stability region of method (2.10) is larger than that of the 3-stage explicit Runge-Kutta methods of order 3.

The method (2.11), obtained for $p_0 = 4$ and defined by
$$y_{s+1} = y_s + h_s f\left(y_s + \frac{1}{2} h_s f\left(y_s + \frac{1}{2^2} h_s f\left(y_s + \frac{1}{2^3} h_s f(y_s)\right)\right)\right), \tag{5.4}$$
requires four evaluations of the function f f fff for each mesh point.
Figure 1: Absolute stability regions
From (3.4) we deduce that the convergence order of this method is 2, so in terms of accuracy it does not compete with the 4-stage explicit Runge-Kutta methods of order 4, which also require four function evaluations.
However, this method too has an advantage concerning the absolute stability region. From (4.1) we deduce that the absolute stability function for (2.11) is defined by
$$R(q) = 1 + q + \frac{q^{2}}{2} + \frac{q^{3}}{2^{3}} + \frac{q^{4}}{2^{6}}, \tag{5.5}$$
and its absolute stability region is plotted in Figure 1(b) with a dashed line. Figure 1(b) also shows, with a continuous line, the absolute stability region of method (2.10); we observe that the region for method (2.11) is larger than that for method (2.10).
We know that the absolute stability function for 4-stage explicit Runge-Kutta methods of order 4 (see [2]) is defined by
$$R(q) = 1 + q + \frac{q^{2}}{2} + \frac{q^{3}}{6} + \frac{q^{4}}{24}, \tag{5.6}$$
and its absolute stability region is plotted in Figure 2(a) with a continuous line, together with that of method (2.11) (dashed line). We observe that the absolute stability region for method (2.11) is larger than that for 4-stage explicit Runge-Kutta methods of order 4.
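One way to quantify this comparison without a plot is to locate, by scanning along the negative real axis, the left endpoint of the real stability interval of each stability polynomial (5.2), (5.3), (5.5) and (5.6). A sketch with our own helper `real_interval_end` (the scan range and resolution are illustrative):

```python
import numpy as np

def real_interval_end(coeffs):
    """Scan |R(q)| along the negative real axis and return the first q
    where |R(q)| >= 1; coeffs = (c0, c1, ...) in increasing degree."""
    qs = np.linspace(0.0, -10.0, 100001)
    R = np.polyval(coeffs[::-1], qs)          # polyval wants highest degree first
    stable = np.abs(R) < 1
    first_unstable = np.argmax(~stable[1:]) + 1   # skip the boundary point q = 0
    return qs[first_unstable]

print(real_interval_end((1, 1, 1/2, 1/8)))        # method (2.10), eq. (5.2)
print(real_interval_end((1, 1, 1/2, 1/6)))        # classical RK3, eq. (5.3)
print(real_interval_end((1, 1, 1/2, 1/8, 1/64)))  # method (2.11), eq. (5.5)
print(real_interval_end((1, 1, 1/2, 1/6, 1/24)))  # classical RK4, eq. (5.6)
```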

6 Numerical example

To test the performance of the proposed methods (2.10) and (2.11) we consider the autonomous initial value problem:
$$\left\{\begin{array}{l} y'(x) = \cos^2(y(x)), \quad x \in [0, 20], \\ y(0) = 0. \end{array}\right. \tag{6.1}$$
The exact solution of the problem is $y : [0, 20] \rightarrow \mathbb{R}$, $y(x) = \arctan(x)$, plotted for the interval $[0, 5]$ in Figure 2(b) with a continuous line. The numerical solutions,
Figure 2: (a) Absolute stability regions (b) Exact solution and numerical solutions for $h = 0.1$
obtained with the methods (2.10) and (2.11) for the fixed step size $h = 0.1$, are also plotted, using Matlab, in Figure 2(b) with circle and star markers, respectively. We observe a very good agreement between the exact solution and the numerical solutions.
Table 1 presents the results for the methods (2.10) and (2.11) and the midpoint rule for different values of $h$ (a sketch reproducing these computations follows the table). The errors have been obtained as the maximum of the absolute errors over the mesh points $x_s = sh$, $s = \overline{1, N}$:
$$E_{\max} = \max\left\{|y(x_s) - y_s| : s = 0, 1, \ldots, N\right\}.$$
Table 1: Maximum absolute errors $E_{\max}$ for problem (6.1)

| $h$ | Midpoint | (2.10) | (2.11) |
| :--- | :--- | :--- | :--- |
| $0.1$ | $4.527354e{-}4$ | $2.289041e{-}4$ | $2.279995e{-}4$ |
| $0.01$ | $4.255123e{-}6$ | $2.261048e{-}6$ | $2.260270e{-}6$ |
| $10^{-3}$ | $4.228619e{-}8$ | $2.257633e{-}8$ | $2.257555e{-}8$ |
| $10^{-4}$ | $4.225559e{-}10$ | $2.257583e{-}10$ | $2.257574e{-}10$ |
| $10^{-5}$ | $4.998224e{-}12$ | $2.207456e{-}12$ | $2.207456e{-}12$ |
| $10^{-6}$ | $2.032729e{-}11$ | $2.032796e{-}11$ | $2.032796e{-}11$ |
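The following sketch reproduces the computations behind Table 1 for the first few step sizes, assuming the `solve` helper from the Section 2 sketch; $p_0 = 2$ gives the midpoint rule, $p_0 = 3$ and $p_0 = 4$ the methods (2.10) and (2.11).

```python
import numpy as np

f = lambda y: np.cos(y) ** 2           # right-hand side of (6.1)
for h in (0.1, 0.01, 1e-3):
    N = round(20 / h)
    xs = np.linspace(0.0, 20.0, N + 1)
    row = []
    for p0 in (2, 3, 4):               # midpoint, (2.10), (2.11)
        ys = solve(f, 0.0, 0.0, 20.0, N, p0)
        row.append(np.max(np.abs(np.arctan(xs) - ys)))   # E_max
    print(h, row)
```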
We note that when the step length decreases by a factor of 10, the error magnitude decreases by a factor of about 100, until rounding errors dominate for the smallest step sizes. This validates the fact that the convergence order of the methods (2.10) and (2.11) is 2.

References

[1] Crouzeix, M. and Mignot, A. L., Analyse numérique des équations différentielles, Masson, Paris, 1989
[2] Lambert, J. D., Numerical Methods for Ordinary Differential Systems: The Initial Value Problem, John Wiley & Sons, Chichester, 1990
[3] Păvăloiu, I., On an approximation formula, Rev. Anal. Numér. Théor. Approx., 26 (1997), no. 1-2, pp. 179-183
[4] Ralston, A., Runge-Kutta methods with minimum error bounds, Math. Comp., 16 (1962), no. 80, pp. 431-437
[5] Traub, J. F., Iterative Methods for the Solution of Equations, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1964