Cite this paper as: F. Pătrulescu, A numerical method for the solution of an autonomous initial value problem, Carpathian J. Math., 28 (2012), pp. 289-296.
A numerical method for the solution of an autonomous initial value problem
Flavius Pătrulescu ^(1){ }^{1},^(1){ }^{1} Tiberiu Popoviciu Institute of Numerical AnalysisP.O. Box 68-1, 400110 Cluj-Napoca, Romania
Abstract
Using a known interpolation formula we introduce a class of numerical methods for approximating the solutions of scalar initial value problems for first order differential equations, which can be identified as explicit Runge-Kutta methods. We determine bounds for the local truncation error, and we compare the convergence order and the stability region with those of explicit Runge-Kutta methods whose convergence order equals the number of stages (i.e., with 2, 3 and 4 stages). The convergence order of our methods is only two, but they have a larger absolute stability region than the above-mentioned methods. In the last section a numerical example is provided, and the obtained numerical approximation is compared with the corresponding exact solution.
Keywords: initial value problem, stability region, convergence order, local truncation error
1 Introduction
Consider a scalar initial value problem (IVP):
\begin{equation*}
\left\{\begin{array}{l}
y^{\prime}=f(x, y), \quad x \in I, \\
y\left(x_{0}\right)=y_{0},
\end{array}\right. \tag{1.1}
\end{equation*}
where $I \subseteq \mathbb{R}$, $y_0 \in \mathbb{R}$, $f: I \times \mathbb{R} \rightarrow \mathbb{R}$ and $x_0 \in I$.
We assume that the function $f$ satisfies all requirements necessary to ensure the existence of a unique solution $y$ on the finite interval $I = [x_0, x_0 + T]$, $0 < T < \infty$; see [1] for details.
In this paper we give a method to approximate the solution of the above initial value problem using an approximation formula for functions given in [3] and [5].
One denotes by $I_T$ the closed interval determined by the two distinct points $x_0$, $x_0 + T \in \mathbb{R}$, where $T \in \mathbb{R}$. For a three times differentiable function $h: I_T \rightarrow \mathbb{R}$ consider the function $g: I_T \rightarrow \mathbb{R}$ given by:
\begin{equation*}
g(x) = h(x_0) + (x - x_0)\, h^{\prime}\!\left(x_0 + \frac{1}{2}(x - x_0)\right), \quad x \in I_T. \tag{1.2}
\end{equation*}
In [3] it is shown that the function $g$ satisfies:
\begin{equation*}
h^{(i)}(x_0) = g^{(i)}(x_0), \quad i = \overline{0,2}, \quad \text{and} \quad |h(x) - g(x)| \leq \frac{7}{24} M_3 |x - x_0|^3, \quad x \in I_T, \tag{1.3}
\end{equation*}
where $M_3 = \sup_{x \in I_T} |h^{(3)}(x)|$.
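As an illustration, the bound (1.3) can be verified numerically. The following minimal Python sketch does this for the sample choice $h(x) = e^x$ on $I_T = [0, 1]$, for which $M_3 = e$ (the sample function and all names are ours):

```python
import math

def g(h, dh, x0, x):
    """Approximation (1.2): g(x) = h(x0) + (x - x0) * h'(x0 + (x - x0)/2)."""
    return h(x0) + (x - x0) * dh(x0 + 0.5 * (x - x0))

# Sample choice: h(x) = exp(x) on I_T = [0, 1], so M3 = sup |h'''| = e.
x0, M3 = 0.0, math.e
for x in [0.1, 0.5, 1.0]:
    err = abs(math.exp(x) - g(math.exp, math.exp, x0, x))
    bound = 7.0 / 24.0 * M3 * abs(x - x0) ** 3
    print(f"x={x:.1f}  |h(x)-g(x)|={err:.2e}  bound={bound:.2e}")
    assert err <= bound  # the bound (1.3) holds
```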
In the next sections, for simplicity, we consider only the autonomous case, i.e. $f = f(y)$; the general case can be treated in the same manner.
The paper is structured as follows. In Section 2 we describe the numerical method together with its versions corresponding to the particular cases of the parameter. In Section 3 we study the convergence of the method and in Section 4 we provide its stability analysis. Two particular cases are investigated in Section 5 and, finally, a numerical example is presented in Section 6.
2 The numerical method
We suppose that the exact solution $y$ of the initial value problem (1.1) is three times differentiable (conditions for the regularity of the exact solution of an initial value problem can be found in [1]). Using the results described above we deduce that there exists an approximation $\tilde{y}$ given by
\begin{equation*}
\tilde{y}(x) = y_0 + (x - x_0) f\!\left(y\!\left(x_0 + \frac{1}{2}(x - x_0)\right)\right), \tag{2.1}
\end{equation*}
for all $x \in [x_0, x_0 + T]$. From (1.3) this approximation satisfies
\begin{equation*}
\tilde{y}^{(i)}(x_0) = y^{(i)}(x_0), \quad i = \overline{0,2}, \quad \text{and} \quad |\tilde{y}(x) - y(x)| \leq \frac{7}{24} M_3 |x - x_0|^3, \quad x \in I,
\end{equation*}
where $M_3 = \sup_{x \in I} |y^{(3)}(x)|$.
Repeating (1.2), the unknown quantity $y\left(x_0 + \frac{1}{2}(x - x_0)\right)$ in (2.1) can be approximated in the same manner and we obtain
\begin{equation*}
y(x) \approx y_0 + (x - x_0) f\!\left(y_0 + \frac{1}{2}(x - x_0) f\!\left(y\!\left(x_0 + \frac{1}{2^2}(x - x_0)\right)\right)\right).
\end{equation*}
Continuing this procedure for $y\left(x_0 + \frac{1}{2^2}(x - x_0)\right)$ and for the next unknown values of $y$, after $p$ steps we obtain:
\begin{equation*}
u_p(x) = y_0 + (x - x_0) f\!\left(y_0 + \frac{1}{2}(x - x_0) f\!\left(\ldots + \frac{1}{2^{p-1}}(x - x_0) f\!\left(y\!\left(x_0 + \frac{1}{2^p}(x - x_0)\right)\right) \ldots\right)\right),
\end{equation*}
and we can define $\tilde{y}(x) = u_p(x)$, $x \in [x_0, x_0 + T]$.
Taking into account that $\frac{1}{2^p}(x - x_0) \rightarrow 0$ as $p \rightarrow \infty$, and since $y$ is a continuous function, we can choose $p = p_0$ such that $y(x_0)$ approximates $y\left(x_0 + \frac{1}{2^{p_0}}(x - x_0)\right)$ with any given accuracy. Replacing this value by $y_0 = y(x_0)$ we obtain
\begin{equation*}
\tilde{u}_{p_0}(x) = y_0 + (x - x_0) f\!\left(y_0 + \frac{1}{2}(x - x_0) f\!\left(\ldots + \frac{1}{2^{p_0 - 1}}(x - x_0) f(y_0) \ldots\right)\right), \tag{2.4}
\end{equation*}
and we use this recurrence relation to construct a numerical method for a scalar initial value problem. The continuous interval $[x_0, x_0 + T]$ is partitioned by the point set
\begin{equation*}
x_0 < x_1 < \ldots < x_N = x_0 + T.
\end{equation*}
Replacing $x$ by $x_1$ in (2.4) we obtain an approximation $\tilde{u}_{p_0}(x_1)$ of the exact value $y(x_1)$. We denote this approximation by $y_1$ and we apply the algorithm (2.4) at the point $x_2$, but instead of $y(x_0) = y_0$ we use the value $y_1$ previously computed. By repeating this procedure we obtain for each $x_s$, $s = \overline{1, N}$, an algorithm that can be described in the following way:
\begin{equation*}
y_{s+1} = y_s + h_s f\!\left(y_s + \frac{1}{2} h_s f\!\left(y_s + \ldots + \frac{1}{2^{p_0 - 1}} h_s f(y_s) \ldots\right)\right), \tag{2.8}
\end{equation*}
where $h_s = x_{s+1} - x_s$, $s = 0, \ldots, N-1$, represents the length of the step.
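For implementation purposes, the nested expression in (2.8) is evaluated from the innermost level (coefficient $1/2^{p_0-1}$) outwards (coefficient $1$). A minimal Python sketch, assuming a uniform mesh for simplicity (the function names are ours):

```python
def step_2_8(f, y, h, p0):
    """One step of method (2.8) for an autonomous f: builds the nested
    evaluation y + h*f(y + h/2*f(... + h/2^{p0-1}*f(y)...)) from the inside out."""
    u = y
    for k in range(p0 - 1, -1, -1):
        u = y + (h / 2 ** k) * f(u)
    return u

def solve_2_8(f, y0, T, N, p0):
    """Apply (2.8) on the uniform mesh x_s = x0 + s*h with h = T/N."""
    h = T / N
    ys = [y0]
    for _ in range(N):
        ys.append(step_2_8(f, ys[-1], h, p0))
    return ys
```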
We have the following equivalence result.
Theorem 2.1 The method (2.8) is equivalent to a $p_0$-stage explicit Runge-Kutta method with the Butcher array given by:
\begin{equation*}
\begin{array}{c|ccccc}
0 & & & & & \\
\frac{1}{2^{p_0-1}} & \frac{1}{2^{p_0-1}} & & & & \\
\frac{1}{2^{p_0-2}} & 0 & \frac{1}{2^{p_0-2}} & & & \\
\vdots & \vdots & \ddots & \ddots & & \\
\frac{1}{2} & 0 & \cdots & 0 & \frac{1}{2} & \\
\hline
 & 0 & \cdots & 0 & 0 & 1
\end{array}
\end{equation*}
Proof. Following [2], for a $p_0$-stage explicit Runge-Kutta method the Butcher array can be represented as follows:
\begin{equation*}
\begin{array}{c|c}
c & A \\
\hline
 & b^T
\end{array}
\end{equation*}
Here $A = (a_{ij})$, $i, j = \overline{1, p_0}$, is a strictly lower triangular matrix (i.e. $a_{ij} = 0$ for $j \geq i$), $b^T = [b_1, \ldots, b_{p_0}]$ and $c^T = [c_1, \ldots, c_{p_0}]$, $0 \leq c_i < 1$. The values $k_i$, $i = \overline{1, p_0}$, for the intermediate points $x_s + c_i h_s$ between $x_s$ and $x_{s+1}$, $s = \overline{0, N-1}$, are defined as follows:
\begin{equation*}
k_1 = f(y_s), \qquad k_i = f\!\left(y_s + h_s \sum_{j=1}^{i-1} a_{ij} k_j\right), \quad i = \overline{2, p_0}, \qquad y_{s+1} = y_s + h_s \sum_{i=1}^{p_0} b_i k_i. \tag{2.9}
\end{equation*}
It is easy to see that, with the coefficients given above, (2.9) provides a rule of the form (2.8), which concludes the proof.
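The equivalence can also be checked numerically. The following Python sketch implements (2.9) with the tableau above (stages ordered from the innermost evaluation of (2.8) outwards) and verifies that it reproduces the nested form for $p_0 = 3$ and an arbitrary autonomous $f$:

```python
def rk_step(f, y, h, p0):
    """Explicit RK step (2.9) with the tableau of Theorem 2.1:
    a_{i,i-1} = 1/2^{p0-i+1} for i >= 2, all other a_{ij} = 0, b = (0,...,0,1)."""
    ks = [f(y)]                      # k_1 = f(y_s)
    for i in range(2, p0 + 1):
        a = 1.0 / 2 ** (p0 - i + 1)  # the only nonzero entry in row i
        ks.append(f(y + h * a * ks[-1]))
    return y + h * ks[-1]            # b^T selects the last stage only

def nested_step(f, y, h, p0):
    """The same step in the nested form (2.8)."""
    u = y
    for k in range(p0 - 1, -1, -1):
        u = y + (h / 2 ** k) * f(u)
    return u

f = lambda y: y * y - y              # an arbitrary autonomous right-hand side
assert abs(rk_step(f, 0.3, 0.1, 3) - nested_step(f, 0.3, 0.1, 3)) < 1e-15
```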
Next, we present some particular cases, obtained for different values of the parameter $p_0$. For $p_0 = 1$ we obtain the Euler forward method given by
\begin{equation*}
y_{s+1} = y_s + h_s f(y_s), \quad s = \overline{0, N-1}.
\end{equation*}
For $p_0 = 3$ we obtain the method
\begin{equation*}
y_{s+1} = y_s + h_s f\!\left(y_s + \frac{1}{2} h_s f\!\left(y_s + \frac{1}{4} h_s f(y_s)\right)\right), \quad s = \overline{0, N-1}, \tag{2.10}
\end{equation*}
and for $p_0 = 4$ the method
\begin{equation*}
y_{s+1} = y_s + h_s f\!\left(y_s + \frac{1}{2} h_s f\!\left(y_s + \frac{1}{4} h_s f\!\left(y_s + \frac{1}{8} h_s f(y_s)\right)\right)\right), \quad s = \overline{0, N-1}. \tag{2.11}
\end{equation*}
In the next sections, for the methods (2.10) and (2.11), we study the local truncation error, stability, consistency and convergence order, and we compare them with other known numerical methods.
3 Local truncation error
In this section we study the local truncation error and convergence order for the method (2.8) when $p_0 \geq 3$, and we also determine bounds for the coefficient of the principal local truncation error.
As in [4], we suppose that
\begin{equation*}
\|f\| < M \quad \text{and} \quad \left\|f^{(j)}\right\| < \frac{L^j}{M^{j-1}} \quad \text{on } [x_0, x_0 + T], \tag{3.1}
\end{equation*}
where $\|f\| = \sup\{|f(t)| : t \in I\}$ and $M, L$ are positive real numbers.
The convergence order of the method is provided in the following result.
Theorem 3.1 The method (2.8) has convergence order 2 and the coefficient of the principal local truncation error, $C_3$, has the following bound:
\begin{equation*}
\|C_3\| \leq \frac{1}{12} M L^2. \tag{3.2}
\end{equation*}
Proof. In order to obtain the local truncation error of the method (2.8), following [2] we consider the operator
\begin{equation*}
\mathcal{L}[z(x), h] = z(x+h) - z(x) - h f\!\left(z(x) + \frac{1}{2} h f\!\left(z(x) + \ldots + \frac{1}{2^{p_0-1}} h f(z(x)) \ldots\right)\right), \tag{3.3}
\end{equation*}
where $z$ is an arbitrary function defined on $[x_0, x_0 + T]$, at least three times differentiable, such that $z^{\prime}(x) = f(z(x))$, $x \in [x_0, x_0 + T]$. For $p_0 \geq 3$, using the Taylor series with respect to $x$ we obtain
\begin{equation*}
\mathcal{L}[z(x), h] = \frac{h^3}{24}\left(f^{\prime\prime}(z(x)) f^2(z(x)) + \left(f^{\prime}(z(x))\right)^2 f(z(x))\right) + \mathcal{O}(h^4). \tag{3.4}
\end{equation*}
Using the definition of the convergence order given in [2] we deduce that the method has second-order accuracy. Also, substituting $z$ by the exact solution $y$ and $x$ by $x_s$, and supposing the localizing assumption $y_i = y(x_i)$, $i = \overline{1, s}$, the local truncation error of the method (see [2]) can be written as
\begin{equation*}
T_{s+1} = \frac{h_s^3}{24}\left(f^{\prime\prime}(y(x_s)) f^2(y(x_s)) + \left(f^{\prime}(y(x_s))\right)^2 f(y(x_s))\right) + \mathcal{O}(h_s^4),
\end{equation*}
so that, using (3.1), $\|C_3\| \leq \frac{1}{24}\left(L^2 M + L^2 M\right) = \frac{1}{12} M L^2$, which concludes the proof.
Note that the method defined above is zero-stable because it verifies the root condition. Also, since the convergence order is 2, it satisfies the consistency condition. It follows that our method is a convergent method; see [2] for details.
4 Stability analysis
In this section we define the stability function and the absolute-stability region for the method (2.8) when $p_0 \geq 3$. Our main result is the following.
Theorem 4.1 The method (2.8) has the stability function given by:
\begin{equation*}
R(q) = 1 + \sum_{k=1}^{p_0} \frac{1}{2^{\frac{k(k-1)}{2}}} q^k, \quad q = h\lambda. \tag{4.1}
\end{equation*}
Proof. Applying (2.8) to the test equation $y^{\prime} = \lambda y$ and expanding the nested expression, the coefficient of $q^k$ is the product $1 \cdot \frac{1}{2} \cdot \frac{1}{2^2} \cdots \frac{1}{2^{k-1}} = \frac{1}{2^{\frac{k(k-1)}{2}}}$, so that $y_{s+1} = R(q) y_s$ with $R(q)$ as above.
The absolute-stability region (see [2]) is given by
\begin{equation*}
\mathcal{R}_A = \{q \in \mathbb{C} : |R(q)| \leq 1\}.
\end{equation*}
We study this region in the next section for the methods (2.10) and (2.11).
5 Method (2.8) when $p_0 = 3$ and $p_0 = 4$
We restrict ourselves to the methods (2.10) and (2.11), which are studied in this section. For $p_0 \geq 5$ the methods obtained from (2.8) have a much more complicated form, with a high computational cost, while the results concerning the accuracy are not significantly better, because from (3.4) we deduce that the convergence order remains 2.
The method (2.10), obtained for $p_0 = 3$, requires three evaluations of the function $f$ for each mesh point.
From (3.4) we deduce that the convergence order for this method is 2, unlike the 3-stage explicit Runge-Kutta methods of order 3, which also require three function evaluations.
But this method has an advantage concerning the absolute stability region. From (4.1) we deduce that the absolute stability function for (2.10) is defined by
\begin{equation*}
R(q) = 1 + q + \frac{1}{2} q^2 + \frac{1}{8} q^3.
\end{equation*}
Using the scanning technique (see [2]) the absolute stability region is plotted, using Matlab software, in Figure 1(a) with a continuous line. We know that the absolute stability function for 3-stage explicit Runge-Kutta methods of order 3 (see [2]) is defined by
\begin{equation*}
R(q) = 1 + q + \frac{1}{2} q^2 + \frac{1}{6} q^3,
\end{equation*}
and the absolute stability region is also plotted, using the same technique as above, in Figure 1(a) with a dashed line. We observe that the absolute stability region for method (2.10) is larger than that for 3-stage explicit Runge-Kutta methods of order 3.

The method (2.11), obtained for $p_0 = 4$, requires four evaluations of the function $f$ for each mesh point.
Figure 1: Absolute stability regions
From (3.4) we deduce that the convergence order for this method is 2, so in terms of accuracy it does not compare with the 4-stage explicit Runge-Kutta methods of order 4, which also require four function evaluations.
But this method has an advantage concerning the absolute stability region. From (4.1) we deduce that the absolute stability function for (2.11) is defined by
\begin{equation*}
R(q) = 1 + q + \frac{1}{2} q^2 + \frac{1}{8} q^3 + \frac{1}{64} q^4,
\end{equation*}
and the absolute stability region is plotted in Figure 1(b) with a dashed line. Figure 1(b) also shows, with a continuous line, the absolute stability region for method (2.10); we observe that the region for method (2.11) is larger than that for method (2.10).
We know that the absolute stability function for 4-stage explicit Runge-Kutta methods of order 4 (see [2]) is defined by
\begin{equation*}
R(q) = 1 + q + \frac{1}{2} q^2 + \frac{1}{6} q^3 + \frac{1}{24} q^4,
\end{equation*}
and the absolute stability region is plotted in Figure 2(a) with a continuous line. Figure 2(a) also shows the absolute stability region for method (2.11) with a dashed line; we observe that the region for method (2.11) is larger than that for 4-stage explicit Runge-Kutta methods of order 4.
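These comparisons can be reproduced, at least along the negative real axis, with a plain grid scan of the complex plane (a cruder substitute for the scanning technique of [2]). The following Python sketch evaluates the four stability polynomials above:

```python
import numpy as np

def stable(coeffs, q):
    """True if |R(q)| <= 1 for R(q) = sum_k coeffs[k] * q^k."""
    return abs(sum(c * q ** k for k, c in enumerate(coeffs))) <= 1.0

# Stability polynomials: method (2.10), 3-stage RK of order 3,
# method (2.11), 4-stage RK of order 4.
polys = {
    "(2.10)": [1, 1, 1/2, 1/8],
    "RK3":    [1, 1, 1/2, 1/6],
    "(2.11)": [1, 1, 1/2, 1/8, 1/64],
    "RK4":    [1, 1, 1/2, 1/6, 1/24],
}
for name, c in polys.items():
    xs = [x for x in np.linspace(-8.0, 0.0, 8001) if stable(c, complex(x))]
    print(f"{name}: leftmost stable real q ~ {min(xs):.3f}")
```

On the real axis the scan confirms that the stability intervals of (2.10) and (2.11) extend further left than those of the corresponding Runge-Kutta methods (roughly $-3.07$ and $-4$ versus $-2.51$ and $-2.79$).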
6 Numerical example
To test the performance of the proposed methods, (2.10) and (2.11), we consider the autonomous initial value problem:
\begin{equation*}
\left\{\begin{array}{l}
y^{\prime}(x) = \cos^2(y(x)), \quad x \in [0, 20], \\
y(0) = 0.
\end{array}\right. \tag{6.1}
\end{equation*}
The exact solution of the problem is $y: [0,20] \rightarrow \mathbb{R}$, $y(x) = \arctan(x)$, and is plotted, for the interval $[0,5]$, in Figure 2(b) with a continuous line. The numerical solutions,
Figure 2: (a) Absolute stability regions (b) Exact solution and numerical solutions for $h = 0.1$
obtained with the methods (2.10) and (2.11) for the fixed step size $h = 0.1$, are also plotted, using Matlab software, in Figure 2(b) with circle and star markers, respectively. We observe a very good agreement between the exact solution and the numerical solutions.
Table 1 presents the results for the methods (2.10) and (2.11) and for the midpoint rule, for different values of $h$. The errors have been obtained as the maximum of the absolute errors at the mesh points $x_s = sh$, $s = \overline{1, N}$:
\begin{equation*}
err = \max_{s = \overline{1, N}} |y(x_s) - y_s|.
\end{equation*}
We note that when the step length decreases by a factor of 10, the error magnitude decreases by a factor of 100. This validates the fact that the convergence order of the methods (2.10) and (2.11) is 2.
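The experiment can be reproduced with the following self-contained Python sketch, which integrates (6.1) with the methods (2.10) and (2.11), measures the maximal absolute error against $\arctan$, and estimates the observed convergence order from two step sizes (all names are ours):

```python
import math

def solve(f, y0, T, N, p0):
    """Method (2.8) with uniform step h = T/N; approximations at x_s = s*h."""
    h, ys = T / N, [y0]
    for _ in range(N):
        u = ys[-1]
        for k in range(p0 - 1, -1, -1):    # nested evaluation of (2.8)
            u = ys[-1] + (h / 2 ** k) * f(u)
        ys.append(u)
    return ys

f = lambda y: math.cos(y) ** 2             # right-hand side of (6.1)
for p0 in (3, 4):                          # methods (2.10) and (2.11)
    errs = []
    for h in (0.1, 0.01):
        N = round(20.0 / h)
        ys = solve(f, 0.0, 20.0, N, p0)
        errs.append(max(abs(ys[s] - math.atan(s * h)) for s in range(1, N + 1)))
    order = math.log10(errs[0] / errs[1])  # error ratio as h decreases 10-fold
    print(f"p0={p0}: err(h=0.1)={errs[0]:.2e}, err(h=0.01)={errs[1]:.2e}, "
          f"observed order ~ {order:.2f}")
```

The printed errors decrease by roughly two orders of magnitude when $h$ is reduced tenfold, in agreement with the second-order convergence established in Theorem 3.1.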
References
[1] Crouzeix, M., Mignot, A.L., Analyse numérique des équations différentielles, Masson, Paris, 1989.
[2] Lambert, J. D., Numerical Methods for Ordinary Differential Systems-The Initial Value Problem, John Wiley & Sons, 1990.
[3] Păvăloiu, I., On an approximation formula, Rev. Anal. Numér. Théor. Approx., 26 (1997), nos. 1-2, pp. 179-183.
[4] Ralston, A., Runge-Kutta methods with minimum error bounds, Math. Comp., 16 (1962), no. 80, pp. 431-437.
[5] Traub, J.F., Iterative Methods for the Solution of Equations, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1964.