A class of numerical methods for autonomous initial value problems

Abstract

In this paper we introduce a class of explicit numerical methods for approximating the solutions of scalar initial value problems for first order differential equations, using a nonlinear interpolation formula. We show that the methods generated in this way can be identified as explicit Runge-Kutta methods and we analyze some particular cases. Finally, numerical examples are provided.

Authors

Flavius Patrulescu
Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy

Keywords

initial value problem; stability region; convergence order; local truncation error

Cite this paper as:

F. Pătrulescu, A class of numerical methods for autonomous initial value problems, Rev. Anal. Numer. Theor. Approx., vol. 41, no. 1 (2012), pp. 82-92

About this paper

Publisher Name

Publishing House of the Romanian Academy (Editura Academiei Române), Cluj-Napoca

Print ISSN

1222-9024

Online ISSN

2457-8126

References

[1] J.C. Butcher, Numerical Methods for Ordinary Differential Equations, John Wiley & Sons, Chichester (2003).
[2] M. Crouzeix, A.L. Mignot, Analyse numérique des équations différentielles, Masson, Paris (1989).
[3] J.D. Lambert, Numerical Methods for Ordinary Differential Systems. The Initial Value Problem, John Wiley & Sons, Chichester (1990).
[4] F. Pătrulescu, A Numerical Method for the Solution of an Autonomous Initial Value Problem, Carpathian J. Math., 28, no. 2 (2012), 289-296.
[5] I. Păvăloiu, On an approximation formula, Rev. Anal. Numer. Theor. Approx., 26, no. 1-2 (1997), 179-183.
[6] A. Ralston, Runge-Kutta Methods with Minimum Error Bounds, Math. Comp., 16, no. 80 (1962), 431-437.
[7] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, N.J. (1964).

Paper (preprint) in HTML form

F. Pătrulescu, Tiberiu Popoviciu Institute of Numerical Analysis, P.O. Box 68-1, 400110 Cluj-Napoca, Romania, fpatrulescu@ictp.acad.ro

2010 Mathematics Subject Classification : 65L05, 65L06, 65L20
Keywords: initial value problem, stability region, convergence order, local truncation error

1 Introduction

Consider a scalar initial value problem (IVP):
$$\begin{cases} y' = f(x, y), \quad x \in I, \\ y(x_0) = y_0, \end{cases} \tag{1.1}$$

where $I \subseteq \mathbb{R}$, $y_0 \in \mathbb{R}$, $f: I \times \mathbb{R} \to \mathbb{R}$ and $x_0 \in I$. We assume that $I = [x_0, x_0 + T]$, $0 < T < \infty$, and that the function $f$ satisfies all requirements necessary to ensure the existence of a unique solution $y$ on the bounded interval $I$; see [1], [2] for details.
In this paper we present a numerical method to approximate the solution of IVP (1.1) using a particular case of a nonlinear interpolation formula analyzed in [5] and [7]. Another particular case was utilized in [4].
For a 5-times differentiable function $h: I \to \mathbb{R}$ consider the function $g: I \to \mathbb{R}$ given by

$$g(x) = h(x_0) + \tfrac{1}{2}(x - x_0)\left[h'(x_0 + \alpha_1(x - x_0)) + h'(x_0 + \alpha_2(x - x_0))\right], \quad x \in I, \tag{1.2}$$

where $\alpha_1 = \frac{3 - \sqrt{3}}{6}$ and $\alpha_2 = \frac{3 + \sqrt{3}}{6}$.
It is shown in [5] that the function $g$ satisfies

$$h^{(i)}(x_0) = g^{(i)}(x_0), \quad i = \overline{0,4}, \quad \text{and} \quad |h(x) - g(x)| \le \tfrac{71}{4320} M_5 |x - x_0|^{5}, \quad x \in I, \tag{1.3}$$

where $M_5 = \sup_{x \in I} |h^{(5)}(x)|$. We also observe that the coefficients $\alpha_1, \alpha_2$ satisfy the equalities
$$\begin{cases} \alpha_1 + \alpha_2 = 1, & \alpha_1^2 + \alpha_2^2 = \frac{2}{3}, \\ \alpha_1^3 + \alpha_2^3 = \frac{1}{2}, & \alpha_1^4 + \alpha_2^4 = \frac{7}{18}. \end{cases} \tag{1.4}$$
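Both the identities (1.4) and the fifth-order accuracy in (1.3) are easy to check numerically. The following sketch (illustrative Python, not part of the paper) evaluates the power sums of $\alpha_1, \alpha_2$ and measures the interpolation error of $g$ for the test function $h(x) = e^x$ at $x_0 = 0$; since the error is $O(|x - x_0|^5)$, halving $x$ should divide it by roughly $2^5 = 32$.

```python
import math

# Nodes of the interpolation formula (1.2).
a1 = (3 - math.sqrt(3)) / 6
a2 = (3 + math.sqrt(3)) / 6

# Power sums appearing in (1.4): should equal 1, 2/3, 1/2, 7/18.
sums = [a1 ** k + a2 ** k for k in (1, 2, 3, 4)]

# Interpolant (1.2) for the test function h(x) = exp(x) at x0 = 0,
# so that h'(t) = exp(t).
def g(x):
    return 1.0 + 0.5 * x * (math.exp(a1 * x) + math.exp(a2 * x))

def err(x):
    return abs(math.exp(x) - g(x))

# Quintic error term: halving x divides the error by roughly 2**5 = 32.
ratio = err(0.1) / err(0.05)
```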
In the next sections, for simplicity, we consider only the autonomous case, i.e., $f = f(y)$. The general case can be treated using similar arguments.
The paper is structured as follows. In Section 2 we derive the numerical method and in Section 3 we analyze the versions corresponding to particular values of its parameter. In Section 4 numerical examples are provided to illustrate the results obtained in the previous sections.

2 The numerical method

The interval $I = [x_0, x_0 + T]$ is partitioned by the point set $\{x_n \mid n = \overline{0, N},\ N \in \mathbb{N}\}$, where the $x_n$ are given by a prescribed rule, and we denote by $y_n$ a numerical approximation to the exact solution $y$ of (1.1) at $x_n$.
We suppose that the exact solution $y$ of the initial value problem (1.1) is at least 5-times differentiable. Conditions for the regularity of the exact solution of an initial value problem can be found in [2].
Using the results presented above we deduce that there exists an approximation $\tilde{y}$ of $y$ given by

$$\begin{aligned} \tilde{y}(x) &= y(x_0) + \tfrac{1}{2}(x - x_0)\left[y'(x_0 + \alpha_1(x - x_0)) + y'(x_0 + \alpha_2(x - x_0))\right] \\ &= y(x_0) + \tfrac{1}{2}(x - x_0)\left[f(y(x_0 + \alpha_1(x - x_0))) + f(y(x_0 + \alpha_2(x - x_0)))\right], \end{aligned} \tag{2.1}$$

for all $x \in I$. From (1.3) we deduce that this approximation satisfies

$$\tilde{y}^{(i)}(x_0) = y^{(i)}(x_0), \quad i = \overline{0,4}, \quad \text{and} \quad |\tilde{y}(x) - y(x)| < \tfrac{71}{4320} M_5 |x - x_0|^{5}, \quad x \in I,$$

where $M_5 = \sup_{x \in I} |y^{(5)}(x)|$.
The unknown quantities $y(x_0 + \alpha_i(x - x_0))$, $i = 1, 2$, in (2.1) can be approximated in the same manner, and we continue this approximation procedure for the next unknown values $y(x_0 + \alpha_1^{p}\alpha_2^{q}(x - x_0))$, $p, q = 1, 2, \ldots$.
Using the notation

$$u_{rs}(x) = y(x_0 + \alpha_1^{r}\alpha_2^{s}(x - x_0)),$$

the algorithm described above can be written for the first $p$ steps in the following compact form:

$$\begin{gathered} u_{00}(x) = y(x_0) + \tfrac{1}{2}(x - x_0)\left[f(u_{10}(x)) + f(u_{01}(x))\right], \\ u_{i-j\,j}(x) = y(x_0) + \tfrac{1}{2}\alpha_1^{i-j}\alpha_2^{j}(x - x_0)\left[f(u_{i-j+1\,j}(x)) + f(u_{i-j\,j+1}(x))\right], \\ j = 0, \ldots, i, \quad i = 1, \ldots, p, \end{gathered} \tag{2.2}$$

and we can define $\tilde{y}(x) = u_{00}(x)$, $x \in I$.
Taking into account that

$$\tfrac{1}{2}\alpha_1^{p-j}\alpha_2^{j}(x - x_0) \to 0, \quad j = \overline{0,p}, \quad \text{as } p \to \infty,$$

we can choose $p = p_0$ such that $y(x_0)$ approximates $u_{p_0 0}, u_{p_0-1\,1}, \ldots, u_{0 p_0}$ with any given accuracy, since $y$ is a continuous function. From (2.2) we obtain

$$\begin{aligned} \tilde{u}_{00}(x) &= y(x_0) + \tfrac{1}{2}(x - x_0)\left[f(\tilde{u}_{10}(x)) + f(\tilde{u}_{01}(x))\right], \\ \tilde{u}_{i-j\,j}(x) &= y(x_0) + \tfrac{1}{2}\alpha_1^{i-j}\alpha_2^{j}(x - x_0)\left[f(\tilde{u}_{i-j+1\,j}(x)) + f(\tilde{u}_{i-j\,j+1}(x))\right], \\ &\qquad j = 0, \ldots, i, \quad i = 1, \ldots, p_0 - 1, \\ \tilde{u}_{p_0-j\,j}(x) &= y(x_0), \quad j = 0, \ldots, p_0. \end{aligned} \tag{2.3}$$
We use this recurrence relation to construct a numerical method for the scalar initial value problem (1.1). Replacing $x$ by $x_1$ in (2.3) we obtain an approximation $\tilde{u}_{00}(x_1)$ of the exact value $y(x_1)$. We denote this approximation by $y_1$ and apply the algorithm (2.3) again for $x_2$, but, instead of $y(x_0) = y_0$, we use the value $y_1$ previously computed. We repeat this procedure for each $x_n$.
Using the notation

$$u_{qr}^{n} = y_n + \tfrac{1}{2}\alpha_1^{q}\alpha_2^{r} h_n \left[f(u_{q+1\,r}^{n}) + f(u_{q\,r+1}^{n})\right],$$

the above algorithm can be written in the following way:

$$\begin{aligned} y_{n+1} &= y_n + \tfrac{1}{2} h_n \left[f(u_{10}^{n}) + f(u_{01}^{n})\right], \\ u_{i-j\,j}^{n} &= y_n + \tfrac{1}{2}\alpha_1^{i-j}\alpha_2^{j} h_n \left[f(u_{i-j+1\,j}^{n}) + f(u_{i-j\,j+1}^{n})\right], \\ &\qquad j = \overline{0,i}, \quad i = \overline{1, p_0 - 1}, \\ u_{p_0-j\,j}^{n} &= y_n, \quad j = \overline{0, p_0}, \end{aligned} \tag{2.4}$$

where $h_n = x_{n+1} - x_n$, $n = \overline{0, N-1}$.
For $p_0 = 1$ we obtain the forward Euler method, given by

$$y_{n+1} = y_n + h_n f(y_n), \quad n = \overline{0, N-1}. \tag{2.5}$$
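The recurrence (2.4) transcribes directly into code. The sketch below (an illustrative Python implementation, not the authors' code) stores the intermediate values $u_{rs}^{n}$ in a dictionary keyed by $(r, s)$; it reduces to forward Euler for $p_0 = 1$ and to the method (3.1) of Section 3 for $p_0 = 2$.

```python
import math

A1 = (3 - math.sqrt(3)) / 6
A2 = (3 + math.sqrt(3)) / 6

def step(f, y, h, p0):
    """One step of method (2.4) for the autonomous problem y' = f(y)."""
    u = {}
    # Top level of (2.4): u_{p0-j, j} = y_n for j = 0, ..., p0.
    for j in range(p0 + 1):
        u[(p0 - j, j)] = y
    # Fill levels i = p0-1, ..., 1.
    for i in range(p0 - 1, 0, -1):
        for j in range(i + 1):
            w = 0.5 * A1 ** (i - j) * A2 ** j * h
            u[(i - j, j)] = y + w * (f(u[(i - j + 1, j)]) + f(u[(i - j, j + 1)]))
    # Final update y_{n+1}.
    return y + 0.5 * h * (f(u[(1, 0)]) + f(u[(0, 1)]))
```

For $f(y) = y$, $y_0 = 1$ and $h = 0.1$, one step with $p_0 = 1$ gives $1.1$ (forward Euler) and one step with $p_0 = 2$ gives $1.105$, in agreement with (2.5) and (3.1).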
We have the following equivalence result.
Theorem 2.1 The method (2.4) can be identified as a $\frac{p_0(p_0+1)}{2}$-stage explicit Runge-Kutta method with the Butcher array given by

$$c = \left[0,\ \alpha_1^{p_0-1},\ \alpha_1^{p_0-2}\alpha_2,\ \ldots,\ \alpha_2^{p_0-1},\ \ldots,\ \alpha_1^{p_0-j},\ \alpha_1^{p_0-j-1}\alpha_2,\ \ldots,\ \alpha_2^{p_0-j},\ \ldots,\ \alpha_1,\ \alpha_2\right]^{T},$$

$$b^{T} = \left[0,\ 0,\ \ldots,\ 0,\ \tfrac{1}{2},\ \tfrac{1}{2}\right],$$

$$A = \begin{bmatrix} B_{p_0-1} & & & & \\ \ddots & B_{p_0-2} & & \bigcirc & \\ & \ddots & B_{p_0-3} & & \\ & \bigcirc & \ddots & \ddots & \\ & & & \ddots & B_1 \end{bmatrix},$$

where

$$B_{p_0-1} = \begin{bmatrix} 0 & \\ \alpha_1^{p_0-1} & \\ \alpha_1^{p_0-2}\alpha_2 & \bigcirc \\ \vdots & \\ \alpha_2^{p_0-1} & \end{bmatrix}, \qquad B_{p_0-j} = \begin{bmatrix} \frac{\alpha_1^{p_0-j}}{2} & \frac{\alpha_1^{p_0-j}}{2} & & \\ & \frac{\alpha_1^{p_0-j-1}\alpha_2}{2} & \frac{\alpha_1^{p_0-j-1}\alpha_2}{2} & \bigcirc \\ & \bigcirc & \ddots & \\ & & \frac{\alpha_2^{p_0-j}}{2} & \frac{\alpha_2^{p_0-j}}{2} \end{bmatrix}, \quad j = \overline{2, p_0-1}.$$
Proof. We know that for a $q$-stage explicit Runge-Kutta method with the Butcher array (see e.g. [1], [3])

$$\begin{array}{c|c} c & A \\ \hline & b^{T} \end{array}$$

the approximation at the new point $x_{n+1} = x_n + h_n$ is given by

$$y_{n+1} = y_n + h_n \sum_{i=1}^{q} b_i k_i, \tag{2.6}$$

where

$$\begin{cases} k_1 = f(x_n, y_n), \\ k_i = f\left(x_n + c_i h_n,\ y_n + h_n \sum_{j=1}^{i-1} a_{ij} k_j\right), \quad i = \overline{2, q}. \end{cases}$$
It is easy to see that, in the autonomous case and for the above Butcher array, (2.6) yields a rule of the form (2.4), which concludes the proof.
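The identification in Theorem 2.1 can be checked directly in the smallest nontrivial case $p_0 = 2$: feeding the 3-stage Butcher array of Section 3 into a generic explicit Runge-Kutta step (2.6) reproduces the step of method (3.1). A minimal sketch (illustrative Python, assuming nothing beyond the formulas above):

```python
import math

def rk_step(f, x, y, h, A, b, c):
    """One step (2.6) of an explicit Runge-Kutta method with Butcher array (A, b, c)."""
    k = []
    for i in range(len(b)):
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(x + c[i] * h, yi))
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

a1 = (3 - math.sqrt(3)) / 6
a2 = (3 + math.sqrt(3)) / 6

# Butcher array of the case p0 = 2, i.e. method (3.1).
A = [[0, 0, 0], [a1, 0, 0], [a2, 0, 0]]
b = [0, 0.5, 0.5]
c = [0, a1, a2]
```

For $f(x, y) = y$, one Runge-Kutta step with this array coincides with a direct evaluation of (3.1).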

3 Method (2.4) when $p_0 = 2, 3$ and 4

For $p_0 = 2$ we obtain

$$y_{n+1} = y_n + \tfrac{1}{2} h_n \left[f(y_n + \alpha_1 h_n f(y_n)) + f(y_n + \alpha_2 h_n f(y_n))\right], \quad n = \overline{0, N-1}. \tag{3.1}$$
The method (3.1) can be written in a more suitable form
$$\begin{aligned} y_{n+1} &= y_n + \tfrac{1}{2} h_n \left[f(u_{10}^{n}) + f(u_{01}^{n})\right], \\ u_{10}^{n} &= y_n + \alpha_1 h_n f(y_n), \qquad u_{01}^{n} = y_n + \alpha_2 h_n f(y_n), \end{aligned}$$

and can be identified as a 3-stage explicit Runge-Kutta method with the Butcher array given by

$$\begin{array}{c|ccc} 0 & & & \\ \alpha_1 & \alpha_1 & 0 & \\ \alpha_2 & \alpha_2 & 0 & 0 \\ \hline & 0 & \frac{1}{2} & \frac{1}{2} \end{array}.$$
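The second-order behavior established in Theorem 3.1 below can be anticipated numerically: halving the step size on $y' = y$, $y(0) = 1$, over $[0, 1]$ should divide the global error by a factor close to $2^2 = 4$. An illustrative Python sketch (not the paper's Matlab code):

```python
import math

A1 = (3 - math.sqrt(3)) / 6
A2 = (3 + math.sqrt(3)) / 6

def solve31(f, y0, x_end, n):
    """Integrate y' = f(y) on [0, x_end] with n equal steps of method (3.1)."""
    h = x_end / n
    y = y0
    for _ in range(n):
        fy = f(y)
        y += 0.5 * h * (f(y + A1 * h * fy) + f(y + A2 * h * fy))
    return y

f = lambda y: y
e1 = abs(math.exp(1) - solve31(f, 1.0, 1.0, 50))   # h = 0.02
e2 = abs(math.exp(1) - solve31(f, 1.0, 1.0, 100))  # h = 0.01
ratio = e1 / e2  # close to 4 for a second-order method
```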
As in [6], we suppose that

$$\|f\| < M \quad \text{and} \quad \left\|f^{(j)}\right\| < \frac{L^{j}}{M^{j-1}}, \tag{3.2}$$

where $\|f\| = \sup\{|f(t)| : t \in I\}$ and $M, L$ are positive real numbers.
The convergence order of the method (3.1) is provided in the following result.
Theorem 3.1 The method (3.1) has convergence order 2, and the coefficient of the principal local truncation error $C_3$ satisfies the bound $\|C_3\| \le \frac{1}{6} M L^{2}$.
Proof. Following [3], in order to obtain the local truncation error of the method (3.1) we consider the operator

$$L[z(x); h] = z(x+h) - z(x) - \frac{h}{2}\left[f\big(z(x) + \alpha_1 h f(z(x))\big) + f\big(z(x) + \alpha_2 h f(z(x))\big)\right], \tag{3.3}$$

where $z$ is an arbitrary, sufficiently differentiable function defined on $I$ such that $z'(x) = f(z(x))$ for all $x \in I$.
Expanding in Taylor series with respect to $x$ and using (1.4) we obtain

$$L[z(x); h] = \frac{h^{3}}{6}\left[f'(z(x))\right]^{2} f(z(x)) + O(h^{4}). \tag{3.4}$$

Then, using the definition of the convergence order given in [3], we deduce that the method has second-order accuracy. Also, substituting $z$ by the exact solution $y$ and $x$ by $x_n$, and making the localizing assumption $y_i = y(x_i)$, $i = \overline{1,n}$, the local truncation error can be written as

$$T_{n+1} = \frac{h^{3}}{6}\left[f'(y(x_n))\right]^{2} f(y(x_n)) + O(h^{4}). \tag{3.5}$$
Moreover, the coefficient of the principal local truncation error is given by

$$C_3 = \frac{1}{6}\left[f'(y(x_n))\right]^{2} f(y(x_n)). \tag{3.6}$$

Next, using (3.2), we have for $C_3$ the bound

$$\|C_3\| = \frac{1}{6}\left\|(f')^{2} f\right\| \le \frac{1}{6}\left\|f'\right\|^{2}\|f\| \le \frac{1}{6} M L^{2},$$
which concludes the proof.
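The leading coefficient $1/6$ in (3.5) can also be observed numerically: for $y' = y$, $y(0) = 1$ we have $f' = f = 1$ at the initial point, so the error of a single step of length $h$ should behave like $h^3/6$. A short illustrative Python sketch:

```python
import math

A1 = (3 - math.sqrt(3)) / 6
A2 = (3 + math.sqrt(3)) / 6

def one_step(h):
    """One step of method (3.1) for f(y) = y, starting from y = 1."""
    f = lambda y: y
    return 1.0 + 0.5 * h * (f(1.0 + A1 * h) + f(1.0 + A2 * h))

h = 0.01
# Scaled one-step error; approaches the coefficient 1/6 as h -> 0.
scaled = (math.exp(h) - one_step(h)) / h ** 3
```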
Following [2], for Runge-Kutta methods, in addition to the row-sum condition $Ae = Ce$ and the consistency condition $b^{T}e = 1$, the sufficient conditions for order 2 are given by

$$b^{T} C e = \frac{1}{2}, \qquad b^{T} A e = \frac{1}{2}, \tag{3.7}$$

where $e = [1, \ldots, 1]^{T}$ and $C = \operatorname{diag}(c)$. A simple calculation shows that the method (3.1) satisfies conditions (3.7), which provides a validation of (3.4).
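The verification of (3.7) for the 3-stage array of method (3.1) is a few lines of arithmetic; it is sketched below in Python for concreteness.

```python
import math

a1 = (3 - math.sqrt(3)) / 6
a2 = (3 + math.sqrt(3)) / 6

# Butcher array of method (3.1).
A = [[0, 0, 0], [a1, 0, 0], [a2, 0, 0]]
b = [0, 0.5, 0.5]
c = [0, a1, a2]

Ae = [sum(row) for row in A]                 # row sums A e, should equal c
consistency = sum(b)                         # b^T e, should be 1
btCe = sum(bi * ci for bi, ci in zip(b, c))  # b^T C e, should be 1/2
btAe = sum(bi * r for bi, r in zip(b, Ae))   # b^T A e, should be 1/2
```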
The stability function and absolute stability region of the method (3.1) are provided in the following result.
Theorem 3.2 The method (3.1) has the stability function given by
$$R(z) = 1 + z + \frac{z^{2}}{2}, \quad z \in \mathbb{C}. \tag{3.8}$$
Proof. Following [3], we apply the method (3.1) to the scalar test equation

$$y' = \lambda y, \quad \lambda \in \mathbb{C}, \quad \operatorname{Re}\lambda < 0,$$

and we obtain the difference equation

$$y_{n+1} = y_n\left[1 + (\lambda h_n) + \tfrac{1}{2}(\alpha_1 + \alpha_2)(\lambda h_n)^{2}\right].$$

Denoting $z = \lambda h_n$ and using (1.4) we obtain the stability function

$$R(z) = 1 + z + \frac{z^{2}}{2}.$$
Note that this function is the same as the stability function of 2-stage explicit Runge-Kutta methods of order 2.
The absolute stability region is given by
$$\mathcal{R} = \left\{z \in \mathbb{C} : \left|1 + z + \tfrac{1}{2} z^{2}\right| < 1\right\}, \tag{3.9}$$

and it is plotted using a scanning technique in Figure 1. The computations have been carried out using Matlab.
Figure 1: Absolute stability regions for method (2.4) when p 0 = 2 , 3 , 4 p 0 = 2 , 3 , 4 p_(0)=2,3,4p_{0}=2,3,4p0=2,3,4
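The scanning idea is simple to reproduce: evaluate $|R(z)|$ on a grid and keep the points with modulus below 1. The sketch below (Python rather than the Matlab used in the paper) scans the real axis, where (3.9) reduces to the interval $(-2, 0)$.

```python
def R(z):
    """Stability function (3.8) of method (3.1); works for real or complex z."""
    return 1 + z + z * z / 2

# Scan the real axis with step 0.01; |R(x)| < 1 holds exactly for -2 < x < 0,
# so the retained points range from -1.99 to -0.01.
inside = [k / 100 for k in range(-300, 101) if abs(R(k / 100)) < 1]
```

The same function applies on a two-dimensional grid for the full region; for instance, the roots $z = -1 \pm i$ of $R$ lie inside $\mathcal{R}$.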
For $p_0 = 3$ we obtain the method

$$\begin{aligned} y_{n+1} &= y_n + \tfrac{1}{2} h_n \left[f(u_{10}^{n}) + f(u_{01}^{n})\right], \\ u_{10}^{n} &= y_n + \tfrac{1}{2}\alpha_1 h_n \left[f(u_{20}^{n}) + f(u_{11}^{n})\right], \\ u_{01}^{n} &= y_n + \tfrac{1}{2}\alpha_2 h_n \left[f(u_{11}^{n}) + f(u_{02}^{n})\right], \\ u_{2-j\,j}^{n} &= y_n + \alpha_1^{2-j}\alpha_2^{j} h_n f(y_n), \quad j = \overline{0,2}, \end{aligned} \tag{3.10}$$
which can be identified as a 6 -stages explicit Runge-Kutta method with the Butcher array given by
$$\begin{array}{c|cccccc}
0 & & & & & & \\
\alpha_1^{2} & \alpha_1^{2} & & & & & \\
\alpha_1\alpha_2 & \alpha_1\alpha_2 & 0 & & & & \\
\alpha_2^{2} & \alpha_2^{2} & 0 & 0 & & & \\
\alpha_1 & 0 & \frac{\alpha_1}{2} & \frac{\alpha_1}{2} & 0 & & \\
\alpha_2 & 0 & 0 & \frac{\alpha_2}{2} & \frac{\alpha_2}{2} & 0 & \\
\hline
 & 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2}
\end{array}$$
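Method (3.10) translates directly into code. The sketch below is a minimal Python implementation; since condition (1.4) fixing $\alpha_1, \alpha_2$ is not reproduced in this excerpt, we assume $\alpha_{1,2} = (3 \mp \sqrt{3})/6$, the two-point Gauss-Legendre nodes on $[0,1]$, which are consistent with the order conditions (3.13).

```python
import math

# Assumed values of alpha_1, alpha_2 (condition (1.4) is not shown in this
# excerpt): the two-point Gauss-Legendre nodes on [0, 1], which satisfy
# alpha_1 + alpha_2 = 1 and alpha_1^2 + alpha_2^2 = 2/3.
A1 = (3.0 - math.sqrt(3.0)) / 6.0
A2 = (3.0 + math.sqrt(3.0)) / 6.0

def step_310(f, yn, h):
    """One step of the 6-stage method (3.10) for the autonomous ODE y' = f(y)."""
    fy = f(yn)
    # innermost level: u_{2-j,j} = y_n + a1^(2-j) a2^j h f(y_n), j = 0, 1, 2
    u20 = yn + A1 * A1 * h * fy
    u11 = yn + A1 * A2 * h * fy
    u02 = yn + A2 * A2 * h * fy
    # intermediate level
    u10 = yn + 0.5 * A1 * h * (f(u20) + f(u11))
    u01 = yn + 0.5 * A2 * h * (f(u11) + f(u02))
    # final update
    return yn + 0.5 * h * (f(u10) + f(u01))
```

For $f(y) = y$ a single step reproduces the stability polynomial (3.14) evaluated at $z = h$, which gives a quick sanity check of the implementation.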
The convergence order of the method (3.10) is provided in the following result.
Theorem 3.3 The method (3.10) has convergence order 3 and the coefficient $C_4$ of the principal local truncation error satisfies the bound $\left\|C_4\right\| \leq \frac{1}{24} M L^3$.
Proof. Using (1.4) and arguments similar to those presented in the proof of Theorem 3.1, we obtain
$$L[z(x);h] = \frac{h^4}{24}\left[f'(z(x))\right]^3 f(z(x)) + O(h^5). \tag{3.11}$$
Therefore we deduce that this method has convergence order 3. Also, we can determine the local truncation error as
$$T_{n+1} = \frac{h^4}{24}\left[f'(y(x_n))\right]^3 f(y(x_n)) + O(h^5) \tag{3.12}$$
and the coefficient of the principal local truncation error is given by
$$C_4 = \frac{1}{24}\left[f'(y(x_n))\right]^3 f(y(x_n)).$$
Using (3.2) we obtain $\left\|C_4\right\| < \frac{1}{24} M L^3$, which concludes the proof.
A simple calculation shows that the method (3.10) satisfies the sufficient conditions for order 3, see [2] for details,
$$b^T C e = \frac{1}{2}, \quad b^T C^2 e = \frac{1}{3}, \quad b^T A C e = \frac{1}{6}, \tag{3.13}$$
and this validates (3.11).
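The order conditions (3.13) can also be checked numerically from the Butcher array of (3.10). The sketch below again assumes the nodes $\alpha_{1,2} = (3 \mp \sqrt{3})/6$ (condition (1.4) is not shown in this excerpt):

```python
import math

a1 = (3.0 - math.sqrt(3.0)) / 6.0  # assumed alpha_1
a2 = (3.0 + math.sqrt(3.0)) / 6.0  # assumed alpha_2

# Butcher matrix A of method (3.10), row by row (strictly lower triangular)
A = [
    [0,       0,      0,      0,      0, 0],
    [a1 * a1, 0,      0,      0,      0, 0],
    [a1 * a2, 0,      0,      0,      0, 0],
    [a2 * a2, 0,      0,      0,      0, 0],
    [0,       a1 / 2, a1 / 2, 0,      0, 0],
    [0,       0,      a2 / 2, a2 / 2, 0, 0],
]
b = [0, 0, 0, 0, 0.5, 0.5]
c = [sum(row) for row in A]  # abscissae: row sums of A

bTCe  = sum(bi * ci for bi, ci in zip(b, c))        # b^T C e
bTC2e = sum(bi * ci * ci for bi, ci in zip(b, c))   # b^T C^2 e
Ac    = [sum(aij * cj for aij, cj in zip(row, c)) for row in A]
bTACe = sum(bi * x for bi, x in zip(b, Ac))         # b^T A C e
print(bTCe, bTC2e, bTACe)  # 1/2, 1/3, 1/6 up to rounding
```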
Theorem 3.4 The method (3.10) has the stability function given by
$$R(z) = 1 + z + \frac{z^2}{2} + \frac{z^3}{6}, \quad z \in \mathbb{C}. \tag{3.14}$$
Proof. The proof uses arguments similar to those in the proof of Theorem 3.2.
We observe that the method (3.10) has the same stability function as 3-stage explicit Runge-Kutta methods of order 3. The absolute stability region, defined by
$$\mathcal{R} = \left\{ z \in \mathbb{C} : \left| 1 + z + \frac{1}{2}z^2 + \frac{1}{6}z^3 \right| < 1 \right\} \tag{3.15}$$
is plotted in Figure 1 using a scanning technique.
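A scanning of the stability region proceeds by sampling a grid in the complex plane and keeping the points where $|R(z)| < 1$; a minimal sketch (membership test only, plotting omitted):

```python
def in_region(z, order=3):
    """Membership test for the absolute stability region:
    |1 + z + z^2/2! + ... + z^order/order!| < 1."""
    r, term, fact = 1.0 + 0.0j, 1.0 + 0.0j, 1
    for k in range(1, order + 1):
        term *= z
        fact *= k
        r += term / fact
    return abs(r) < 1.0

# scan a rectangle covering the region; the retained points trace region (3.15)
pts = [complex(x / 20.0, y / 20.0)
       for x in range(-70, 11) for y in range(-60, 61)
       if in_region(complex(x / 20.0, y / 20.0))]
```

With `order=4` the same scan produces the region (3.18) of the fourth-order method.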
For $p_0 = 4$ we obtain the method
$$\begin{aligned}
y_{n+1} &= y_n+\tfrac{1}{2}h_n\left[f(u_{10}^{n})+f(u_{01}^{n})\right] \\
u_{i-j,\,j}^{n} &= y_n+\tfrac{1}{2}\alpha_1^{i-j}\alpha_2^{j}h_n\left[f(u_{i-j+1,\,j}^{n})+f(u_{i-j,\,j+1}^{n})\right], \quad j=\overline{0,i}, \quad i=\overline{1,2}, \\
u_{3-j,\,j}^{n} &= y_n+\alpha_1^{3-j}\alpha_2^{j}h_n f(y_n), \quad j=\overline{0,3},
\end{aligned} \tag{3.16}$$
which can be identified as a 10-stage explicit Runge-Kutta method with the Butcher array given by
$$\begin{array}{c|cccccccccc}
0 & & & & & & & & & & \\
\alpha_1^{3} & \alpha_1^{3} & & & & & & & & & \\
\alpha_1^{2}\alpha_2 & \alpha_1^{2}\alpha_2 & 0 & & & & & & & & \\
\alpha_1\alpha_2^{2} & \alpha_1\alpha_2^{2} & 0 & 0 & & & & & & & \\
\alpha_2^{3} & \alpha_2^{3} & 0 & 0 & 0 & & & & & & \\
\alpha_1^{2} & 0 & \frac{\alpha_1^{2}}{2} & \frac{\alpha_1^{2}}{2} & 0 & 0 & & & & & \\
\alpha_1\alpha_2 & 0 & 0 & \frac{\alpha_1\alpha_2}{2} & \frac{\alpha_1\alpha_2}{2} & 0 & 0 & & & & \\
\alpha_2^{2} & 0 & 0 & 0 & \frac{\alpha_2^{2}}{2} & \frac{\alpha_2^{2}}{2} & 0 & 0 & & & \\
\alpha_1 & 0 & 0 & 0 & 0 & 0 & \frac{\alpha_1}{2} & \frac{\alpha_1}{2} & 0 & & \\
\alpha_2 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{\alpha_2}{2} & \frac{\alpha_2}{2} & 0 & \\
\hline
 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2}
\end{array}$$
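The triangular structure of (3.16) extends to an arbitrary level $p_0$, as in the family (2.4). The sketch below implements one step of the general recursion, again assuming $\alpha_{1,2} = (3 \mp \sqrt{3})/6$ (condition (1.4) is not shown in this excerpt); $p_0 = 2, 3, 4$ recover the methods (3.1), (3.10) and (3.16), respectively.

```python
import math

A1 = (3.0 - math.sqrt(3.0)) / 6.0  # assumed alpha_1
A2 = (3.0 + math.sqrt(3.0)) / 6.0  # assumed alpha_2

def step_24(f, yn, h, p0):
    """One step of the family (2.4) for the autonomous ODE y' = f(y)."""
    fy = f(yn)
    # innermost level i = p0 - 1: u_{p0-1-j, j} = y_n + a1^(p0-1-j) a2^j h f(y_n)
    u = {(p0 - 1 - j, j): yn + A1 ** (p0 - 1 - j) * A2 ** j * h * fy
         for j in range(p0)}
    # intermediate levels i = p0 - 2, ..., 1, each averaging two stages above
    for i in range(p0 - 2, 0, -1):
        for j in range(i + 1):
            u[(i - j, j)] = yn + 0.5 * A1 ** (i - j) * A2 ** j * h * (
                f(u[(i - j + 1, j)]) + f(u[(i - j, j + 1)]))
    # final update from the two level-1 stages
    return yn + 0.5 * h * (f(u[(1, 0)]) + f(u[(0, 1)]))
```

For $f(y) = y$ one step with $p_0 = 3$ and $p_0 = 4$ reproduces the cubic and quartic stability polynomials (3.14) and (3.17) at $z = h$.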
Using (1.4) and arguments similar to those used above, we deduce that the method (3.16) has convergence order 4. The stability function is given by
$$R(z) = 1 + z + \frac{z^2}{2} + \frac{z^3}{6} + \frac{z^4}{24} \tag{3.17}$$
and we observe that it coincides with the stability function of 4-stage explicit Runge-Kutta methods of order 4. The absolute stability region, defined by
$$\mathcal{R} = \left\{ z \in \mathbb{C} : \left| 1 + z + \frac{1}{2}z^2 + \frac{1}{6}z^3 + \frac{1}{24}z^4 \right| < 1 \right\} \tag{3.18}$$
is plotted in Figure 1 using a scanning technique.
We restrict ourselves to the cases where $p_0$ takes the values 2, 3 and 4. For $p_0 \geq 5$ the methods obtained from (2.4) have a much more complicated form, with a high computational cost, and the gains in accuracy are not as significant. From the next theorem we deduce that the convergence order is not greater than 4.
Theorem 3.5 For $p_0 \geq 5$ the method (2.4) has maximal convergence order 4.
Proof. Following [3], we consider the operator
$$\begin{aligned}
L[z(x), h] &= z(x+h) - z(x) - \frac{h}{2}\left[ f(z(x+\alpha_1 h)) + f(z(x+\alpha_2 h)) \right] \\
&= z(x+h) - z(x) - \frac{h}{2}\left[ z'(x+\alpha_1 h) + z'(x+\alpha_2 h) \right],
\end{aligned}$$
where $z$ is an arbitrary function defined on $I$, at least 5-times differentiable, with $z'(x) = f(z(x))$ for all $x \in I$.
Next, using Taylor series with respect to $x$ and (1.4), we obtain
$$L[z(x), h] = h^5\left( \frac{1}{120} - \frac{1}{48}\cdot\frac{7}{18} \right) z^{(5)}(x) + O(h^6).$$
Then, from the definition of the convergence order given in [3], we deduce that for $p_0 \geq 5$ the method (2.4) has convergence order 4, which concludes the proof.
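The $h^5$ coefficient above is nonzero, which is precisely what caps the convergence order at 4; the arithmetic can be verified with exact rationals:

```python
from fractions import Fraction

# coefficient of h^5 z^(5)(x) in L[z(x), h]
coef = Fraction(1, 120) - Fraction(1, 48) * Fraction(7, 18)
print(coef)  # 1/4320, nonzero
```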
Note that the method (2.4) is zero-stable because it satisfies the root condition. Also, since the convergence order is at least 2 for all $p_0 \geq 2$, it satisfies the consistency condition. It follows that our method is convergent, see [3] for details.

4 Numerical examples

We consider the following initial value problems.
Example 1
$$\begin{cases} y'(x) = \cos^2(y(x)), & x \in [0,20], \\ y(0) = 0. \end{cases} \tag{4.1}$$
The exact solution is $y:[0,20] \rightarrow \mathbb{R}$ given by $y(x) = \arctan(x)$.
Example 2
$$\begin{cases} y'(x) = \dfrac{y(x)}{4}\left(1 - \dfrac{y(x)}{20}\right), & x \in [0,20], \\ y(0) = 1. \end{cases} \tag{4.2}$$
The exact solution is $y:[0,20] \rightarrow \mathbb{R}$ given by
$$y(x) = \frac{20}{1 + 19\exp(-x/4)}.$$
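Both closed-form solutions can be checked against their differential equations, for instance by comparing a central-difference approximation of $y'$ with the right-hand side:

```python
import math

def y1(x):  # exact solution of (4.1)
    return math.atan(x)

def y2(x):  # exact solution of (4.2)
    return 20.0 / (1.0 + 19.0 * math.exp(-x / 4.0))

def residual(y, rhs, x, d=1e-6):
    """|y'(x) - rhs(y(x))| with y' approximated by a central difference."""
    dy = (y(x + d) - y(x - d)) / (2.0 * d)
    return abs(dy - rhs(y(x)))

# sample points inside [0, 20]; residuals should be at round-off level
checks = [residual(y1, lambda v: math.cos(v) ** 2, x) for x in (0.5, 5.0, 15.0)]
checks += [residual(y2, lambda v: v / 4.0 * (1.0 - v / 20.0), x) for x in (0.5, 5.0, 15.0)]
```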
We apply the methods (3.1), (3.10) and (3.16) with constant steplength to determine numerical solutions for the above examples.
As a measure of the performance we consider the errors obtained as the maximum of the absolute errors at the mesh points $x_n = nh$, $n = \overline{0,N}$,
$$E_{\max} = \max\left\{ |y(x_n) - y_n| : n = 0, 1, \ldots, N \right\},$$
for different values of the step $h$.
In the next tables we present the results given by the proposed methods (3.1), (3.10) and (3.16) in comparison with the results given by the Adams-Bashforth methods of orders 2, 3 and 4, defined by
$$\begin{aligned}
A\text{-}B(2):&\quad y_{n+1} - y_n = \frac{h}{2}\left(3f_n - f_{n-1}\right) \\
A\text{-}B(3):&\quad y_{n+1} - y_n = \frac{h}{12}\left(23f_n - 16f_{n-1} + 5f_{n-2}\right) \\
A\text{-}B(4):&\quad y_{n+1} - y_n = \frac{h}{24}\left(55f_n - 59f_{n-1} + 37f_{n-2} - 9f_{n-3}\right).
\end{aligned}$$
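As an illustration of the experimental setup, the following sketch applies $A$-$B(2)$ to Example 1 and computes $E_{\max}$; the starting value $y_1 = \arctan(h)$ is taken from the exact solution (an assumption, since the paper does not state its starting procedure).

```python
import math

def ab2_emax(h, x_end=20.0):
    """E_max for A-B(2) on example (4.1): y' = cos^2(y), y(0) = 0,
    started with the exact value y_1 = arctan(h) (assumed start)."""
    f = lambda y: math.cos(y) ** 2
    n_steps = round(x_end / h)
    ys = [0.0, math.atan(h)]  # y_0, y_1
    for n in range(1, n_steps):
        ys.append(ys[n] + h / 2.0 * (3.0 * f(ys[n]) - f(ys[n - 1])))
    return max(abs(y - math.atan(n * h)) for n, y in enumerate(ys))
```

For $h = 0.1$ this reproduces the order of magnitude of the $A$-$B(2)$ entry in Table 1, and decreasing $h$ tenfold shrinks the error roughly $10^2$ times, consistent with order 2.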
Also, we present results given by the explicit Runge-Kutta methods with order equal to the number of stages, denoted by
$R\text{-}K(2)$, $R\text{-}K(3)$ and $R\text{-}K(4)$.
| $h$ | 1.0e-01 | 1.0e-02 | 1.0e-03 | 1.0e-04 | 1.0e-05 |
| :--- | :--- | :--- | :--- | :--- | :--- |
| (3.1) | 5.755e-04 | 5.415e-06 | 5.381e-08 | 5.378e-10 | 5.377e-12 |
| $A$-$B(2)$ | 2.209e-03 | 2.251e-05 | 2.256e-07 | 2.257e-09 | 2.257e-11 |
| $R$-$K(2)$ | 5.755e-04 | 5.415e-06 | 5.381e-08 | 5.378e-10 | 5.377e-12 |
| (3.10) | 1.333e-05 | 1.244e-08 | 1.235e-11 | 1.254e-14 | 2.464e-14 |
| $A$-$B(3)$ | 1.109e-03 | 1.166e-06 | 1.166e-09 | 1.166e-12 | 2.464e-14 |
| $R$-$K(3)$ | 2.028e-05 | 2.077e-08 | 2.082e-11 | 1.976e-14 | 2.486e-14 |
| (3.16) | 2.202e-07 | 2.050e-11 | 2.886e-15 | 1.110e-14 | 2.464e-14 |
| $A$-$B(4)$ | 1.109e-03 | 1.166e-06 | 1.166e-09 | 1.166e-12 | 2.442e-14 |
| $R$-$K(4)$ | 5.357e-07 | 5.337e-11 | 5.884e-15 | 1.132e-14 | 2.486e-14 |
Table 1: $E_{\max}$ for Example 1
| $h$ | 1.0e-01 | 1.0e-02 | 1.0e-03 | 1.0e-04 | 1.0e-05 |
| :--- | :--- | :--- | :--- | :--- | :--- |
| (3.1) | 5.878e-04 | 5.952e-06 | 5.959e-08 | 5.960e-10 | 5.929e-12 |
| $A$-$B(2)$ | 1.892e-03 | 1.907e-05 | 1.908e-07 | 1.908e-09 | 1.892e-11 |
| $R$-$K(2)$ | 4.805e-04 | 4.861e-06 | 4.867e-08 | 4.866e-10 | 4.757e-12 |
| (3.10) | 2.725e-06 | 2.764e-09 | 2.744e-12 | 4.192e-13 | 6.359e-13 |
| $A$-$B(3)$ | 1.387e-03 | 1.404e-05 | 1.406e-07 | 1.406e-09 | 1.404e-11 |
| $R$-$K(3)$ | 4.048e-06 | 4.083e-09 | 4.137e-12 | 4.209e-13 | 6.359e-13 |
| (3.16) | 9.951e-09 | 9.912e-13 | 6.750e-14 | 4.192e-13 | 6.359e-13 |
| $A$-$B(4)$ | 1.420e-03 | 1.407e-05 | 1.406e-07 | 1.406e-09 | 1.404e-11 |
| $R$-$K(4)$ | 1.779e-08 | 1.788e-12 | 6.750e-14 | 4.192e-13 | 6.323e-13 |
Table 2: $E_{\max}$ for Example 2
Analyzing the results, we see that for the above examples the proposed methods are at least comparable to the classical ones.
We observe that when the steplength decreases 10 times, the error magnitude for the methods (3.1), (3.10) and (3.16) decreases $10^2$, $10^3$ and $10^4$ times, respectively. This validates the fact that the convergence orders of the methods (3.1), (3.10) and (3.16) are 2, 3 and 4, respectively.
Because the method is explicit, we recommend it especially for non-stiff problems, where the requirements imposed on the steplength by stability are less restrictive than those imposed by accuracy; see [1] and [3] for more details. Also, we expect the method (2.4) to give better results in the case of a variable steplength.

References

[1] Butcher, J. C., Numerical Methods for Ordinary Differential Equations, John Wiley & Sons, 2008.
[2] Crouzeix, M. and Mignot, A. L., Analyse numérique des équations différentielles, Masson, Paris, 1989.
[3] Lambert, J. D., Numerical Methods for Ordinary Differential Systems: The Initial Value Problem, John Wiley & Sons, 1990.
[4] Pătrulescu, F., A Numerical Method for the Solution of an Autonomous Initial Value Problem, Carpathian J. Math., 28, no. 2, pp. 289-296, 2012.
[5] Păvăloiu, I., On an Approximation Formula, Rev. Anal. Numér. Théor. Approx., 26, no. 1-2, pp. 179-183, 1997.
[6] Ralston, A., Runge-Kutta Methods with Minimum Error Bounds, Math. Comp., 16, no. 80, pp. 431-437, 1962.
[7] Traub, J. F., Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, N.J., 1964.
