A class of numerical methods for autonomous initial value problems\(\S \)

Flavius–Olimpiu Pătrulescu\(^\ast \)

June 12, 2012.

\(\S \) This work has been supported in part by grant no. PN II RU TE 2011-3-0013

\(^\ast \)Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy, P.O. Box 68–1, \(400110\), Cluj-Napoca, Romania, e-mail: fpatrulescu@ictp.acad.ro.

In this paper we introduce a class of explicit numerical methods for approximating the solutions of scalar initial value problems for first order differential equations, using a nonlinear interpolation formula. We show that the methods generated in this way can be identified as explicit Runge-Kutta methods and we analyze some particular cases. Finally, numerical examples are provided.

MSC. 65L05, 65L06, 65L20

Keywords. initial value problem, stability region, convergence order, local truncation error

1 Introduction

Consider a scalar initial value problem (IVP):

\begin{equation} \begin{cases} y'=f(x,y),\quad x\in I\\ y(x_0)=y_0,\label{ec-gen} \end{cases} \end{equation}
3

where \(I\subseteq \mathbb {R}\), \(y_0\in \mathbb {R}\), \(f:I\times \mathbb {R}\rightarrow \mathbb {R}\) and \(x_0\in I\). We assume that \(I=[x_0,x_0+T]\), \(0<T<\infty \), and that the function \(f\) satisfies all the requirements necessary to ensure the existence of a unique solution \(y\) on the bounded interval \(I\), see [1], [2] for details.

In this paper we present a numerical method to approximate the solution of the IVP (3) using a particular case of a nonlinear interpolation formula analyzed in [5] and [7]. Another particular case was utilized in [4].

For a \(5\)-times differentiable function \(h:I\rightarrow \mathbb {R}\) consider the function \(g:I\rightarrow \mathbb {R}\) given by

\begin{equation} g(x)=h(x_0)+\tfrac {1}{2}(x-x_0)[h'(x_0+\alpha _1(x-x_0))+h'(x_0+\alpha _2(x-x_0))],\quad x\in I,\label{definire_g}\end{equation}
4

where \(\alpha _1=\tfrac {3-\sqrt{3}}{6}\) and \(\alpha _2=\tfrac {3+\sqrt{3}}{6}\).

It is shown in [5] that the function \(g\) satisfies

\begin{equation} h^{(i)}(x_0)=g^{(i)}(x_0),\, i=\overline{0,4}\, \label{cond_g} \mbox{ and }\, |h(x)-g(x)|\leq \tfrac {71}{4320}M_5|x-x_0|^5,\, x\in I, \end{equation}
5

where \(\displaystyle M_5=\sup _{x\in I}|h^{(5)}(x)|\). Also, we observe that the coefficients \(\alpha _1\), \(\alpha _2\) satisfy the equalities

\begin{equation} \begin{cases} \alpha _1+\alpha _2=1,\quad \alpha ^2_1+\alpha ^2_2=\tfrac {2}{3},\\ \alpha ^3_1+\alpha ^3_2=\tfrac {1}{2},\quad \alpha ^4_1+\alpha ^4_2=\tfrac {7}{18}.\label{relatii_coeficienti} \end{cases} \end{equation}
6
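For instance, the first two identities in (6) can be checked directly: since \(\alpha _1\alpha _2=\tfrac {(3-\sqrt{3})(3+\sqrt{3})}{36}=\tfrac {1}{6}\), we have

\begin{equation*} \alpha _1+\alpha _2=\tfrac {3-\sqrt{3}}{6}+\tfrac {3+\sqrt{3}}{6}=1,\qquad \alpha ^2_1+\alpha ^2_2=(\alpha _1+\alpha _2)^2-2\alpha _1\alpha _2=1-\tfrac {1}{3}=\tfrac {2}{3}, \end{equation*}

and the remaining two identities follow in the same way from the elementary symmetric functions \(\alpha _1+\alpha _2=1\), \(\alpha _1\alpha _2=\tfrac {1}{6}\).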

In the next sections, for simplicity, we consider only the autonomous case, i.e. \(f=f(y)\). The general case can be treated using similar arguments.

The paper is structured as follows. In Section \(2\) we derive the numerical method and in Section \(3\) we analyze its versions corresponding to particular values of the parameter \(p_0\). In Section \(4\) numerical examples are provided to illustrate the results obtained in the previous sections.

2 The numerical method

The interval \(I=[x_0,x_0+T]\) is partitioned by the point set \(\{ x_n|n=\overline{0,N}, N\in \mathbb {N}\} \), where \(x_n\) are given by a prescribed rule, and we denote by \(y_n\) a numerical approximation to the exact solution \(y\) of (3) at \(x_n\).

We suppose that the exact solution \(y\) of the initial value problem (3) is at least \(5\)-times differentiable. Conditions for the regularity of the exact solution of an initial value problem can be found in [2].

Using the results presented above we deduce that there exists an approximation \(\tilde{y}\) of \(y\) given by

\begin{align} \tilde{y}(x)& =y(x_0)+\tfrac {1}{2}(x-x_0)[y'(x_0+\alpha _1(x-x_0))+y'(x_0+\alpha _2(x-x_0))]\nonumber \\ & =y(x_0)+\tfrac {1}{2}(x-x_0)[f(y(x_0+\alpha _1(x-x_0)))+f(y(x_0+\alpha _2(x-x_0)))],\label{formula_aproximativa} \end{align}

for all \(x\in I\). From (5) we deduce that this approximation satisfies

\begin{equation*} \tilde{y}^{(i)}(x_0)=y^{(i)}(x_0),\, i=\overline{0,4} \mbox{ and }|\tilde{y}(x)-y(x)|\leq \tfrac {71}{4320}M_5|x-x_0|^5,\, x\in I,\end{equation*}

where \(\displaystyle M_5=\sup _{x\in I}|y^{(5)}(x)|\).

The unknown quantities \(y(x_0+\alpha _i(x-x_0))\), \(i=1,2\), in (\ref{formula_aproximativa}) can be approximated in the same manner, and we continue this approximation procedure for the next unknown values \(y(x_0+\alpha ^p_1\alpha ^q_2(x-x_0))\), \(p,\, q=1,2,\ldots \)

Using the notation

\[ u_{rs}(x)=y(x_0+\alpha ^r_1\alpha ^s_2(x-x_0)) \]

the algorithm described above can be written for the first \(p\) steps in the following compact form

\begin{eqnarray} \label{algoritm_aproximativ1} u_{00}(x)& =& y(x_0)+\tfrac {1}{2}(x-x_0)[f(u_{10}(x))+f(u_{01}(x))]\\ u_{i-j j}(x)& =& y(x_0)+\tfrac {1}{2}\alpha ^{i-j}_1\alpha ^{j}_2(x-x_0)[f(u_{i-j+1 j}(x))+f(u_{i-j j+1}(x))]\nonumber \\ \quad & \quad & \qquad j=0,\ldots ,i,\quad i=1,\ldots ,p\nonumber \end{eqnarray}

and we can define \(\tilde{y}(x)=u_{00}(x)\), \(x\in I\).

Taking into account that

\[ \tfrac {1}{2}\alpha ^{p-j}_1\alpha ^j_2(x-x_0)\rightarrow 0,\, j=\overline{0,\, p}, \mbox{ as } p\rightarrow \infty , \]

we can choose \(p=p_0\) such that \(y(x_0)\) approximates \(u_{p_00}\), \(u_{p_0-1\, 1},\ldots ,u_{0p_0}\) with any given accuracy, since \(y\) is a continuous function. From (\ref{algoritm_aproximativ1}) we obtain

\begin{eqnarray} \label{algoritm_aproximativ} \tilde{u}_{00}(x)& =& y(x_0)+\tfrac {1}{2}(x-x_0)[f(\tilde{u}_{10}(x))+f(\tilde{u}_{01}(x))]\\ \tilde{u}_{i-j j}(x)& =& y(x_0)+\tfrac {1}{2}\alpha ^{i-j}_1\alpha ^{j}_2(x-x_0)[f(\tilde{u}_{i-j+1 j}(x))+f(\tilde{u}_{i-j j+1}(x))]\nonumber \\ \quad & \quad & \qquad j=0,\ldots ,i,\quad i=1\ldots ,p_0-1\nonumber \\ \tilde{u}_{p_0-jj}(x)& =& y(x_0),\quad j=0,\ldots ,p_0.\nonumber \end{eqnarray}

We use this recurrence relation to construct a numerical method for the scalar initial value problem (3). Replacing \(x\) by \(x_1\) in (\ref{algoritm_aproximativ}) we obtain an approximation \(\tilde{u}_{00}(x_1)\) for the exact value \(y(x_1)\). We denote this approximation by \(y_1\) and we apply again the algorithm (\ref{algoritm_aproximativ}) for \(x_2\), but, instead of \(y(x_0)=y_0\), we consider the value \(y_1\) previously computed. We repeat this procedure for each \(x_n\), \(n=\overline{1,N}\).

Using the notation

\begin{equation*} u^n_{qr}=y_n+\tfrac {1}{2}\alpha ^q_1\alpha ^r_2h_n[f(u^n_{q+1r})+f(u^n_{qr+1})], \end{equation*}

the above algorithm can be written in the following way

\begin{eqnarray} \label{metoda-numerica} y_{n+1}& =& y_n+\tfrac {1}{2}h_n[f(u^n_{10})+f(u^n_{01})]\\ u^n_{i-jj}& =& y_n+\tfrac {1}{2}\alpha ^{i-j}_1\alpha ^{j}_2h_n[f(u^n_{i-j+1j})+f(u^n_{i-jj+1})]\nonumber \\ \quad & \quad & \quad j=\overline{0,\, i},\quad i=\overline{1,\, p_0-1},\nonumber \\ u^n_{p_0-jj}& =& y_n,\quad j=\overline{0,\, p_0}\nonumber \end{eqnarray}

where \(h_n=x_{n+1}-x_n\), \(n=\overline{0,\, N-1}\).
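For illustration, the recurrence (12) can be implemented directly. The following Python sketch (the names step and solve are ours, introduced only for this illustration) builds the stage values \(u^n_{qr}\) level by level, starting from the innermost level \(u^n_{p_0-j\, j}=y_n\):

\begin{verbatim}
import math

A1 = (3 - math.sqrt(3)) / 6   # alpha_1
A2 = (3 + math.sqrt(3)) / 6   # alpha_2

def step(f, yn, h, p0):
    """One step of method (12) for the autonomous problem y' = f(y)."""
    # u[(q, r)] plays the role of u^n_{qr}; the innermost level equals y_n
    u = {(p0 - j, j): yn for j in range(p0 + 1)}
    for i in range(p0 - 1, 0, -1):            # levels i = p0-1, ..., 1
        for j in range(i + 1):
            q, r = i - j, j
            u[(q, r)] = yn + 0.5 * A1**q * A2**r * h * \
                (f(u[(q + 1, r)]) + f(u[(q, r + 1)]))
    return yn + 0.5 * h * (f(u[(1, 0)]) + f(u[(0, 1)]))

def solve(f, y0, T, N, p0):
    """Integrate on [x0, x0+T] with constant steplength h = T/N."""
    h, y = T / N, y0
    for _ in range(N):
        y = step(f, y, h, p0)
    return y
\end{verbatim}

For \(p_0=1\) the level loop is empty and one step reduces to the Euler forward method, in agreement with the remark that follows.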

For \(p_0=1\) we obtain the Euler forward method given by

\begin{equation} y_{n+1}=y_n+h_nf(y_n),\quad n=\overline{0,\, N-1}.\end{equation}
13

We have the following equivalence result.

Theorem 1

The method \((\ref{metoda-numerica})\) can be identified as a \(\tfrac {p_0(p_0+1)}{2}\)–stage explicit Runge-Kutta method with the Butcher array given by

\[ c=[0,\alpha ^{p_0-1}_1,\alpha ^{p_0-2}_1\alpha _2,\ldots ,\alpha ^{p_0-1}_2,\ldots ,\alpha ^{p_0-j}_1,\alpha ^{p_0-j-1}_1\alpha _2,\ldots ,\alpha ^{p_0-j}_2,\ldots ,\alpha _1,\alpha _2]^T, \]
\[ b^T=[0,0,\ldots ,0,\tfrac {1}{2},\tfrac {1}{2}], \]
\[ A=\left[ \begin{array}{ccccc} 0 & & & & \\ B_{p_0-1}& 0& & \bigcirc & \\ & B_{p_0-2}& 0& & \\ & \bigcirc & \ddots & \ddots & \\ & & & B_1& 0 \end{array} \right], \]

where the nonzero blocks, placed below the diagonal, are

\[ B_{p_0-1}=\left[ \begin{array}{c} \alpha ^{p_0-1}_1\\ \alpha ^{p_0-2}_1\alpha _2\\ \vdots \\ \alpha ^{p_0-1}_2 \end{array} \right],\qquad B_{p_0-j}=\left[ \begin{array}{ccccc} \tfrac {\alpha ^{p_0-j}_1}{2}& \tfrac {\alpha ^{p_0-j}_1}{2}& & & \\ & \tfrac {\alpha ^{p_0-j-1}_1\alpha _2}{2}& \tfrac {\alpha ^{p_0-j-1}_1\alpha _2}{2}& & \\ & & \ddots & \ddots & \\ & & & \tfrac {\alpha ^{p_0-j}_2}{2}& \tfrac {\alpha ^{p_0-j}_2}{2} \end{array} \right],\quad j=\overline{2,p_0-1}. \]
Proof.
We know that for a \(q\)-stage explicit Runge-Kutta method with the Butcher array (see e.g. [1], [3])

\begin{equation*} \begin{tabular}{c|c} $c$ & $A$ \\ \hline $\quad $ & $b^T$ \end{tabular}\end{equation*}

the approximation at the new point \(x_{n+1}=x_n+h_n\) is given by

\begin{equation} y_{n+1}=y_n+h_n\sum ^q_{i=1}b_ik_i,\label{RKgenerala} \end{equation}
14

where

\begin{equation*} \begin{cases} k_1=f(x_n,y_n),\\ \displaystyle k_i=f(x_n+c_ih_n,y_n+h_n\sum ^{i-1}_{j=1}a_{ij}k_j),\quad i=\overline{2,\, q}. \end{cases}\end{equation*}

It is easy to see that, for the autonomous case and for the above Butcher array, (14) provides a rule of the form (12), which concludes the proof.

3 Method (12) when \(p_0=2\), \(3\) and \(4\)

For \(p_0=2\) we obtain

\begin{equation} y_{n+1}=y_n+\tfrac {1}{2}h_n[f(y_n+\alpha _1h_nf(y_n))+f(y_n+\alpha _2h_nf(y_n))],\quad n=\overline{0,\, N-1}.\label{metodaI} \end{equation}
17

The method (17) can be written in a more suitable form

\begin{eqnarray*} & & y_{n+1}=y_n+\tfrac {1}{2}h_n[f(u^n_{10})+f(u^n_{01})],\\ & & u^n_{10}=y_n+\alpha _1h_nf(y_n),\quad u^n_{01}=y_n+\alpha _2h_nf(y_n)\nonumber \end{eqnarray*}

and can be identified as a \(3\)–stage explicit Runge-Kutta method with the Butcher array given by

\begin{equation*} \begin{tabular}{c|c c c} $\quad $ & $0$ & $\quad $ & $\quad $ \\ $\alpha _1$ & $\alpha _1$ & $0$ & $\quad $ \\ $\alpha _2$ & $\alpha _2$ & $0$ & $0$ \\ \hline $\quad $ & $0$ & $\tfrac {1}{2}$ & $\tfrac {1}{2}$ \end{tabular}\end{equation*}
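The identification can also be checked numerically: for an autonomous \(f\), one step of the generic rule (14) with this Butcher array coincides with one step of (17). A small Python sketch (all names are ours; the right-hand side is the one from Example 7 below):

\begin{verbatim}
import math

a1 = (3 - math.sqrt(3)) / 6
a2 = (3 + math.sqrt(3)) / 6

A = [[0, 0, 0], [a1, 0, 0], [a2, 0, 0]]   # Butcher matrix above
b = [0, 0.5, 0.5]

def rk_step(f, yn, h):
    """Generic explicit Runge-Kutta step (14), autonomous case."""
    k = []
    for i in range(3):
        k.append(f(yn + h * sum(A[i][j] * k[j] for j in range(i))))
    return yn + h * sum(bi * ki for bi, ki in zip(b, k))

def step17(f, yn, h):
    """Method (17) written out directly."""
    return yn + 0.5 * h * (f(yn + a1 * h * f(yn)) + f(yn + a2 * h * f(yn)))

f = lambda y: math.cos(y) ** 2
print(rk_step(f, 0.0, 0.1) - step17(f, 0.0, 0.1))   # zero up to rounding
\end{verbatim}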

As in [6], we suppose that

\begin{equation} \| f\| < M \mbox{ and } \| f^{(j)}\| < \tfrac {L^j}{M^{j-1}},\label{conditii_Ralston} \end{equation}

where \(\| f\| =\sup \{ |f(t)|:t\in I\} \) and \(M\), \(L\) are positive real numbers.

The convergence order of the method (17) is provided in the following result.

Theorem 2

The method \((\ref{metodaI})\) has convergence order \(2\) and the coefficient of the principal local truncation error \(C_3\) satisfies the bound \(\| C_3\| \leq \tfrac {1}{6}ML^2.\)

Proof.
Following [ 3 ] , in order to obtain the local truncation error of the method (17) we consider the operator

\begin{equation} L[z(x);h]=z(x+h)-z(x)-\tfrac {h}{2}[f(z(x)+\alpha _1hf(z(x)))+f(z(x)+\alpha _2hf(z(x)))], \end{equation}
18

where \(z\) is an arbitrary, sufficiently differentiable function defined on \(I\) with \(z'(x)=f(z(x))\), for all \(x\in I\).

Expanding in Taylor series around \(x\) and using (6) we obtain

\begin{equation} L[z(x);h]=\tfrac {h^3}{6}[f'(z(x))]^2f(z(x))+O(h^4).\label{operatormetodaI} \end{equation}
19

Then, using the definition of the convergence order given in [3] we deduce that the method is second-order accurate. Also, substituting \(z\) by the exact solution \(y\), \(x\) by \(x_n\), and supposing the localizing assumption \(y_i=y(x_i)\), \(i=\overline{1,\, n}\), the local truncation error can be written as

\begin{equation} T_{n+1}=\tfrac {h^3}{6}[f'(y(x_n))]^2f(y(x_n))+O(h^4). \end{equation}
20

Moreover, the coefficient of principal local truncation error is given by

\begin{equation} C_3=\tfrac {1}{6}[f'(y(x_n))]^2f(y(x_n)). \end{equation}
21

Next, using (\ref{conditii_Ralston}) we have for \(C_3\) the following bound

\begin{equation*} \| C_3\| =\tfrac {1}{6}\| (f')^2f\| \leq \tfrac {1}{6}\| f'\| ^2\| f\| \leq \tfrac {1}{6}ML^2, \end{equation*}

which concludes the proof.

Following [2], for Runge-Kutta methods, in addition to the row-sum condition, \(Ae=Ce\), and the consistency condition, \(b^Te=1\), the sufficient conditions for order \(2\) are given by

\begin{equation} b^TCe=\tfrac {1}{2}\qquad b^TAe=\tfrac {1}{2},\label{conditii_ordin2} \end{equation}
22

where \(e=[1,\ldots ,1]^T\) and \(C=\mbox{diag}(c)\). A simple calculation shows that the method (17) satisfies conditions (22), which represents a validation of (19).

The stability function and absolute stability region of the method (17) are provided in the following result.

Theorem 3

The method \((\ref{metodaI})\) has the stability function given by

\begin{equation} R(z)=1+z+\tfrac {z^2}{2},\quad z\in \mathbb {C}. \end{equation}
23

Proof.
Following [ 3 ] , we apply the method (17) to the scalar test equation

\begin{equation*} y'=\lambda y,\quad \lambda \in \mathbb {C},\quad {\rm Re}\, \lambda <0, \end{equation*}

and we obtain the difference equation

\begin{equation*} y_{n+1}=y_n[1+(\lambda h_n)+\tfrac {1}{2}(\alpha _1+\alpha _2)(\lambda h_n)^2]. \end{equation*}

Denoting \(z=\lambda h_n\) and using (6) we obtain the stability function

\begin{equation*} R(z)=1+z+\tfrac {z^2}{2}. \end{equation*}

Note that this function is the same as the stability function of \(2\)-stage explicit Runge-Kutta methods of order \(2\).
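The computation above is easy to reproduce symbolically; a short sketch using sympy (our choice of tool, not used in the paper):

\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
a1 = (3 - sp.sqrt(3)) / 6
a2 = (3 + sp.sqrt(3)) / 6

# method (17) applied to y' = lambda*y gives y_{n+1} = R(z)*y_n, z = lambda*h_n
R = 1 + z / 2 * ((1 + a1 * z) + (1 + a2 * z))
print(sp.expand(R))           # z**2/2 + z + 1
\end{verbatim}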

The absolute stability region is given by

\begin{equation} \mathcal{R}=\{ z\in \mathbb {C}:|1+z+\tfrac {1}{2}z^2|<1\} \end{equation}
24

and it is plotted using a scanning technique in Figure 1. The computations have been carried out using Matlab.
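The scanning technique is straightforward to sketch. The following Python fragment (our reconstruction with matplotlib, not the author's Matlab code) traces the boundaries \(|R(z)|=1\) of the three regions shown in Figure 1:

\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# stability functions of methods (17), (25) and (31)
stab = [lambda z: 1 + z + z**2/2,
        lambda z: 1 + z + z**2/2 + z**3/6,
        lambda z: 1 + z + z**2/2 + z**3/6 + z**4/24]

x = np.linspace(-4.0, 1.0, 500)
y = np.linspace(-4.0, 4.0, 500)
Z = x[None, :] + 1j * y[:, None]     # rectangular scan of the complex plane

for R in stab:
    plt.contour(x, y, np.abs(R(Z)), levels=[1.0])   # boundary |R(z)| = 1
plt.xlabel('Re z'); plt.ylabel('Im z')
plt.title('Absolute stability regions for p0 = 2, 3, 4')
plt.show()
\end{verbatim}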

\includegraphics[height=3.3572in, width=4.0776in]{reg_abs_stab}
Figure 1 Absolute stability regions for method (12) when \(p_0=2,\, 3,\, 4.\)

For \(p_0=3\) we obtain the method

\begin{eqnarray} \label{metodaII} y_{n+1}& =& y_n+\tfrac {1}{2}h_n[f(u^n_{10})+f(u^n_{01})]\\ u^n_{10}& =& y_n+\tfrac {1}{2}\alpha _1h_n[f(u^n_{20})+f(u^n_{11})]\nonumber \\ u^n_{01}& =& y_n+\tfrac {1}{2}\alpha _2h_n[f(u^n_{11})+f(u^n_{02})]\nonumber \\ u^n_{2-jj}& =& y_n+\alpha ^{2-j}_1\alpha ^j_2h_nf(y_n),\quad j=\overline{0,\, 2}\nonumber \end{eqnarray}

which can be identified as a \(6\)–stage explicit Runge-Kutta method with the Butcher array given by

\begin{equation*} \begin{tabular}{c|c c c c c c} $\quad $ & $0$ & $\quad $ & $\quad $ & $\quad $ & $\quad $ & $\quad $ \\ $\alpha ^2_1$ & $\alpha ^2_1$ & $0$ & $\quad $ & $\quad $ & $\quad $ & $\quad $ \\ $\alpha _1\alpha _2$ & $\alpha _1\alpha _2$ & $0$ & $0$ & $\quad $ & $\quad $ & $\quad $ \\ $\alpha ^2_2$ & $\alpha ^2_2$ & $0$ & $0$ & $0$ & $\quad $ & $\quad $ \\ $\alpha _1$ & $0$ & $\tfrac {\alpha _1}{2}$ & $\tfrac {\alpha _1}{2}$ & $0$ & $0$ & $\quad $ \\ $\alpha _2$ & $0$ & $0$ & $\tfrac {\alpha _2}{2}$ & $\tfrac {\alpha _2}{2}$ & $0$ & $0$ \\ \hline $\quad $ & $0$ & $0$ & $0$ & $0$ & $\tfrac {1}{2}$ & $\tfrac {1}{2}$ \end{tabular}\label{matrice_metoda6evaluari} \end{equation*}

The convergence order of the method (25) is provided in the following result.

Theorem 4

The method \((\ref{metodaII})\) has convergence order \(3\) and the coefficient of the principal local truncation error \(C_4\) satisfies the bound \(\| C_4\| \leq \tfrac {1}{24}ML^3.\)

Proof.
Using (6) and arguments similar to those presented in the proof of Theorem 2 we obtain

\begin{equation} L[z(x);h]=\tfrac {h^4}{24}[f'(z(x))]^3f(z(x))+O(h^5).\label{operator_metoda6evaluari} \end{equation}
26

Therefore we deduce that this method has convergence order \(3\). Also, we can determine the local truncation error as

\begin{equation} T_{n+1}=\tfrac {h^4}{24}[f'(y(x_n))]^3f(y(x_n))+O(h^5) \end{equation}
27

and the coefficient of principal local truncation error is given by

\begin{equation*} C_4=\tfrac {1}{24}[f'(y(x_n))]^3f(y(x_n)). \end{equation*}

Using (\ref{conditii_Ralston}) we obtain \(\| C_4\| \leq \tfrac {1}{24}ML^3\), which concludes the proof.

A simple calculation shows that the method (25) satisfies the sufficient conditions for order \(3\), see [2] for details,

\begin{equation} b^TCe=\tfrac {1}{2},\quad b^TC^2e=\tfrac {1}{3},\quad b^TACe=\tfrac {1}{6} \label{conditii_ordin3} \end{equation}
28

and this represents a validation of (26).

Theorem 5

The method \((\ref{metodaII})\) has the stability function given by

\begin{equation} R(z)=1+z+\tfrac {z^2}{2}+\tfrac {z^3}{6}, \quad z\in \mathbb {C}\label{stabilitate_metII}. \end{equation}
29

Proof.
The proof follows by the same arguments as in the proof of Theorem 3.

We observe that the method (25) has the same stability function as \(3\)-stage explicit Runge-Kutta methods of order \(3\). The absolute stability region defined by

\begin{equation} \mathcal{R}=\{ z\in \mathbb {C}:|1+z+\tfrac {1}{2}z^2+\tfrac {1}{6}z^3|<1\} \end{equation}
30

is plotted in Figure 1.

For \(p_0=4\) we obtain the method

\begin{eqnarray} \label{metodaIII} y_{n+1}& =& y_n+\tfrac {1}{2}h_n[f(u^n_{10})+f(u^n_{01})]\\ u^n_{i-jj}& =& y_n+\tfrac {1}{2}\alpha ^{i-j}_1\alpha ^j_2h_n[f(u^n_{i-j+1j})+f(u^n_{i-jj+1})]\nonumber \\ \quad & \quad & \quad j=\overline{0,\, i},\quad i=\overline{1,\, 2},\nonumber \\ u^n_{3-jj}& =& y_n+\alpha ^{3-j}_1\alpha ^j_2h_nf(y_n),\quad j=\overline{0,\, 3}\nonumber \end{eqnarray}

and can be identified as a \(10\)–stage explicit Runge-Kutta method with the Butcher array given by

\begin{equation*} \begin{tabular}{c| c c c c c c c c c c} $0$ & $0$ & & & & & & & & & \\ $\alpha ^3_1$ & $\alpha ^3_1$ & $0$ & & & & & & & & \\ $\alpha ^2_1\alpha _2$ & $\alpha ^2_1\alpha _2$ & $0$ & $0$ & & & & & & & \\ $\alpha _1\alpha ^2_2$ & $\alpha _1\alpha ^2_2$ & $0$ & $0$ & $0$ & & & & & & \\ $\alpha ^3_2$ & $\alpha ^3_2$ & $0$ & $0$ & $0$ & $0$ & & & & & \\ $\alpha ^2_1$ & $0$ & $\tfrac {\alpha ^2_1}{2}$ & $\tfrac {\alpha ^2_1}{2}$ & $0$ & $0$ & $0$ & & & & \\ $\alpha _1\alpha _2$ & $0$ & $0$ & $\tfrac {\alpha _1\alpha _2}{2}$ & $\tfrac {\alpha _1\alpha _2}{2}$ & $0$ & $0$ & $0$ & & & \\ $\alpha ^2_2$ & $0$ & $0$ & $0$ & $\tfrac {\alpha ^2_2}{2}$ & $\tfrac {\alpha ^2_2}{2}$ & $0$ & $0$ & $0$ & & \\ $\alpha _1$ & $0$ & $0$ & $0$ & $0$ & $0$ & $\tfrac {\alpha _1}{2}$ & $\tfrac {\alpha _1}{2}$ & $0$ & $0$ & \\ $\alpha _2$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $\tfrac {\alpha _2}{2}$ & $\tfrac {\alpha _2}{2}$ & $0$ & $0$ \\ \hline & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $\tfrac {1}{2}$ & $\tfrac {1}{2}$ \end{tabular}\label{matrice_10evaluari} \end{equation*}

Using (6) and similar arguments to those used above we deduce that the method (31) has convergence order \(4\). The stability function is given by

\begin{equation} R(z)=1+z+\tfrac {z^2}{2}+\tfrac {z^3}{6}+\tfrac {z^4}{24} \end{equation}
32

and we observe that it is the same as the stability function of \(4\)-stage explicit Runge-Kutta methods of order \(4\). The absolute stability region defined by

\begin{equation} \mathcal{R}=\{ z\in \mathbb {C}:|1+z+\tfrac {1}{2}z^2+\tfrac {1}{6}z^3+\tfrac {1}{24}z^4|<1\} \end{equation}
33

is plotted in Figure 1.

We restrict ourselves to the cases when \(p_0\) takes the values \(2\), \(3\) and \(4\). For \(p_0\geq 5\) the methods obtained from (12) have a much more complicated form and a higher computational cost, while the gain in accuracy is modest. Indeed, from the next theorem we deduce that the convergence order is not greater than \(4\).

Theorem 6

For \(p_0\geq 5\) the method \((\ref{metoda-numerica})\) has maximal convergence order \(4\).

Proof.
Following [ 3 ] , we consider the operator

\begin{eqnarray*} L[z(x),h]& =& z(x+h)-z(x)-\tfrac {h}{2}[f(z(x+\alpha _1h))+f(z(x+\alpha _2h))]\\ & =& z(x+h)-z(x)-\tfrac {h}{2}[z'(x+\alpha _1h)+z'(x+\alpha _2h)], \end{eqnarray*}

where \(z\) is an arbitrary function defined on \(I\), at least \(5\)-times differentiable, with \(z'(x)=f(z(x))\), for all \(x\in I\).

Next, using Taylor series with respect to \(x\) and (6) we obtain

\begin{equation*} L[z(x),h]=h^5\big(\tfrac {1}{120}-\tfrac {1}{48}\cdot \tfrac {7}{18}\big)z^{(5)}(x)+O(h^6)=\tfrac {1}{4320}h^5z^{(5)}(x)+O(h^6). \end{equation*}

Since this coefficient does not vanish, from the definition of the convergence order given in [3] we deduce that for \(p_0\geq 5\) the method (12) has convergence order \(4\), which concludes the proof.

Note that the method (12) is zero-stable because it satisfies the root condition. Also, since the convergence order is at least \(2\) for all \(p_0\geq 2\), the consistency condition is satisfied. It follows that the method (12) is convergent, see [3] for details.

4 Numerical examples

We consider the following initial value problems.

Example 7

\begin{equation} \begin{cases} y’(x)=\cos ^2(y(x)),\quad x\in [0,20]\\ y(0)=0. \end{cases} \end{equation}
34

The exact solution is \(y:[0,20]\rightarrow \mathbb {R}\) given by \(y(x)=\arctan (x)\). \(\square \)

Example 8

\begin{equation} \begin{cases} y’(x)=\tfrac {y(x)}{4}(1-\tfrac {y(x)}{20}),\quad x\in [0,20]\\ y(0)=1. \end{cases} \end{equation}
35

The exact solution is \(y:[0,20]\to \mathbb {R}\) given by

\begin{equation*} y(x)=\tfrac {20}{1+19\exp (-x/4)}.\hfil \qed \end{equation*}

We apply the methods (17), (25) and (31) with constant steplength to determine numerical solutions for the above examples.

As a measure of the performance we consider the error obtained as the maximum of the absolute errors at the mesh points \(x_n=nh\), \(n=\overline{0,\, N}\)

\begin{equation*} E_{\max }=\max \{ |y(x_n)-y_n|:n=0,1,\ldots ,N\} , \end{equation*}

for different values of step \(h\).
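For instance, \(E_{\max }\) for the method (17) applied to Example 7 can be computed as follows (a Python sketch under the same constant-steplength setting; it should reproduce the first row of Table 1):

\begin{verbatim}
import math

a1 = (3 - math.sqrt(3)) / 6
a2 = (3 + math.sqrt(3)) / 6
f = lambda y: math.cos(y) ** 2        # Example 7; exact solution arctan(x)

for h in [1e-1, 1e-2, 1e-3]:
    N = round(20 / h)
    y, emax = 0.0, 0.0
    for n in range(N):                # method (17) with constant steplength
        y += 0.5 * h * (f(y + a1 * h * f(y)) + f(y + a2 * h * f(y)))
        emax = max(emax, abs(math.atan((n + 1) * h) - y))
    print('h = %.0e   E_max = %.3e' % (h, emax))
\end{verbatim}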

In the next tables we present the results given by the proposed methods (17), (25) and (31) in comparison with the results given by the Adams-Bashforth methods of orders \(2\), \(3\) and \(4\), defined by

  • \(A-B(2):\quad y_{n+1}-y_n=\tfrac {h}{2}(3f_n-f_{n-1}),\)

  • \(A-B(3):\quad y_{n+1}-y_n=\tfrac {h}{12}(23f_n-16f_{n-1}+5f_{n-2}),\)

  • \(A-B(4):\quad y_{n+1}-y_n=\tfrac {h}{24}(55f_n-59f_{n-1}+37f_{n-2}-9f_{n-3})\).

Also, we present results given by the explicit Runge-Kutta methods with order equal to the number of stages, defined by the following Butcher arrays

\begin{equation*} \begin{tabular}{c c c}\begin{tabular}{c|c c} $0$ & \\ $\tfrac {1}{2}$ & $\tfrac {1}{2}$ \\ \hline $\quad $ & $0$ & $1$ \\ \end{tabular} & \begin{tabular}{c|c c c c} $0$ \\ $\tfrac {1}{2}$ & $\tfrac {1}{2}$ \\ $1$ & $-1$ & $2$ \\ \hline $\quad $ & $\tfrac {1}{6}$ & $\tfrac {2}{3}$ & $\tfrac {1}{6}$ \end{tabular} & \begin{tabular}{c|c c c c c} $0$ \\ $\tfrac {1}{2}$ & $\tfrac {1}{2}$ \\ $\tfrac {1}{2}$ & $0$ & $\tfrac {1}{2}$ \\ $1$ & $0$ & $0$ & $1$ \\ \hline $\quad $ & $\tfrac {1}{6}$ & $\tfrac {1}{3}$ & $\tfrac {1}{3}$ & $\tfrac {1}{6}$ \end{tabular} \\ $R-K(2)$ & $R-K(3)$ & $R-K(4)$ \end{tabular}\end{equation*}

\begin{equation*} \begin{tabular}{c|c c c c c} $h$ & $1.0e-01$ & $1.0e-02$ & $1.0e-03$ & $1.0e-04$ & $1.0e-05$ \\ \hline $(\ref{metodaI})$ & $5.755e-04$ & $5.415e-06$ & $5.381e-08$ & $5.378e-10$ & $5.377e-12$ \\ $A-B(2)$ & $2.209e-03$ & $2.251e-05$ & $2.256e-07$ & $2.257e-09$ & $2.257e-11$ \\ $R-K(2)$ & $5.755e-04$ & $5.415e-06$ & $5.381e-08$ & $5.378e-10$ & $5.377e-12$ \\ $(\ref{metodaII})$ & $1.333e-05$ & $1.244e-08$ & $1.235e-11$ & $1.254e-14$ & $2.464e-14$ \\ $A-B(3)$ & $1.109e-03$ & $1.166e-06$ & $1.166e-09$ & $1.166e-12$ & $2.464e-14$ \\ $R-K(3)$ & $2.028e-05$ & $2.077e-08$ & $2.082e-11$ & $1.976e-14$ & $2.486e-14$ \\ $(\ref{metodaIII})$ & $2.202e-07$ & $2.050e-11$ & $2.886e-15$ & $1.110e-14$ & $2.464e-14$ \\ $A-B(4)$ & $1.109e-03$ & $1.166e-06$ & $1.166e-09$ & $1.166e-12$ & $2.442e-14$ \\ $R-K(4)$ & $5.357e-07$ & $5.337e-11$ & $5.884e-15$ & $1.132e-14$ & $2.486e-14$ \end{tabular} \end{equation*}

Table 1 \(E_{\max }\) for Example 7

\begin{equation*} \begin{tabular}{c|c c c c c} $h$ & $1.0e-01$ & $1.0e-02$ & $1.0e-03$ & $1.0e-04$ & $1.0e-05$ \\ \hline $(\ref{metodaI})$ & $5.878e-04$ & $5.952e-06$ & $5.959e-08$ & $5.960e-10$ & $5.929e-12$ \\ $A-B(2)$ & $1.892e-03$ & $1.907e-05$ & $1.908e-07$ & $1.908e-09$ & $1.892e-11$ \\ $R-K(2)$ & $4.805e-04$ & $4.861e-06$ & $4.867e-08$ & $4.866e-10$ & $4.757e-12$ \\ $(\ref{metodaII})$ & $2.725e-06$ & $2.764e-09$ & $2.744e-12$ & $4.192e-13$ & $6.359e-13$ \\ $A-B(3)$ & $1.387e-03$ & $1.404e-05$ & $1.406e-07$ & $1.406e-09$ & $1.404e-11$ \\ $R-K(3)$ & $4.048e-06$ & $4.083e-09$ & $4.137e-12$ & $4.209e-13$ & $6.359e-13$ \\ $(\ref{metodaIII})$ & $9.951e-09$ & $9.912e-13$ & $6.750e-14$ & $4.192e-13$ & $6.359e-13$ \\ $A-B(4)$ & $1.420e-03$ & $1.407e-05$ & $1.406e-07$ & $1.406e-09$ & $1.404e-11$ \\ $R-K(4)$ & $1.779e-08$ & $1.788e-12$ & $6.750e-14$ & $4.192e-13$ & $6.323e-13$ \end{tabular} \end{equation*}

Table 2 \(E_{\max }\) for Example 8

Analyzing the results, we can see that for the above examples the proposed methods are at least comparable to the classical ones.

We observe that if the steplength decreases \(10\) times then the error magnitude for the methods (17), (25) and (31) decreases approximately \(10^2\), \(10^3\) and \(10^4\) times, respectively. These results represent a validation of the fact that the convergence orders of the methods (17), (25) and (31) are \(2\), \(3\) and \(4\), respectively.

Because the method is explicit, we recommend it especially for non-stiff problems, where the requirements on the steplength imposed by stability are less restrictive than the requirements imposed by accuracy, see [1] and [3] for more details. Also, we expect that the method (12) gives better results in the case of a variable steplength.

Bibliography

[1] J.C. Butcher, Numerical Methods for Ordinary Differential Equations, John Wiley & Sons, 2008.

[2] M. Crouzeix and A.L. Mignot, Analyse numérique des équations différentielles, Masson, Paris, 1989.

[3] J.D. Lambert, Numerical Methods for Ordinary Differential Systems: The Initial Value Problem, John Wiley & Sons, 1990.

[4] F. Pătrulescu, A numerical method for the solution of an autonomous initial value problem, Carpathian J. Math., 28, no. 2, pp. 289–296, 2012.

[5] I. Păvăloiu, On an approximation formula, Rev. Anal. Numér. Théor. Approx., 26, no. 1–2, pp. 179–183, 1997.

[6] A. Ralston, Runge-Kutta methods with minimum error bounds, Math. Comp., 16, no. 80, pp. 431–437, 1962.

[7] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, N.J., 1964.