A class of numerical methods for autonomous initial value problems
F. Pătrulescu
Tiberiu Popoviciu Institute of Numerical Analysis, P.O. Box 68-1, 400110 Cluj-Napoca, Romania, fpatrulescu@ictp.acad.ro
Abstract
In this paper we introduce a class of explicit numerical methods for approximating the solutions of scalar initial value problems for first order differential equations, using a nonlinear interpolation formula. We show that the methods generated in this way can be identified as explicit Runge-Kutta methods and we analyze some particular cases. Finally, numerical examples are provided.
2010 Mathematics Subject Classification : 65L05, 65L06, 65L20
Keywords: initial value problem, stability region, convergence order, local truncation error
1 Introduction
Consider a scalar initial value problem (IVP):
\begin{equation*}
\left\{\begin{array}{l}
y^{\prime}=f(x, y), \quad x \in I, \\
y\left(x_{0}\right)=y_{0},
\end{array}\right. \tag{1.1}
\end{equation*}
where $I \subseteq \mathbb{R}$, $y_{0} \in \mathbb{R}$, $f: I \times \mathbb{R} \rightarrow \mathbb{R}$ and $x_{0} \in I$. We assume that $I=\left[x_{0}, x_{0}+T\right]$, $0<T<\infty$, and that the function $f$ satisfies all requirements necessary to ensure the existence of a unique solution $y$ on the bounded interval $I$; see [1], [2] for details.
In this paper we present a numerical method to approximate the solution of IVP (1.1) using a particular case of a nonlinear interpolation formula analyzed in [5] and [7]. Another particular case was utilized in [4].
For a $5$-times differentiable function $h: I \rightarrow \mathbb{R}$ consider the function $g: I \rightarrow \mathbb{R}$ given by
\begin{equation*}
g(x)=h\left(x_{0}\right)+\frac{1}{2}\left(x-x_{0}\right)\left[h^{\prime}\left(x_{0}+\alpha_{1}\left(x-x_{0}\right)\right)+h^{\prime}\left(x_{0}+\alpha_{2}\left(x-x_{0}\right)\right)\right], \quad x \in I, \tag{1.2}
\end{equation*}
where $\alpha_{1}=\frac{3-\sqrt{3}}{6}$ and $\alpha_{2}=\frac{3+\sqrt{3}}{6}$.
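The fifth-order agreement stated in (1.3) below can be checked numerically. The following sketch evaluates the approximation (1.2) for a shrinking increment $t=x-x_{0}$ and estimates the observed order from two successive errors; the test function $h=\sin$ and the sample point are our own arbitrary choices, not taken from the paper.

```python
import math

ALPHA1 = (3 - math.sqrt(3)) / 6
ALPHA2 = (3 + math.sqrt(3)) / 6

def g_approx(h, hp, x0, x):
    """Formula (1.2): uses only h(x0) and two samples of the derivative h'."""
    t = x - x0
    return h(x0) + 0.5 * t * (hp(x0 + ALPHA1 * t) + hp(x0 + ALPHA2 * t))

# Arbitrary smooth test function: h = sin, h' = cos, expanded around x0 = 0.3.
x0 = 0.3
errs = [abs(math.sin(x0 + t) - g_approx(math.sin, math.cos, x0, x0 + t))
        for t in (0.1, 0.05)]
order = math.log(errs[0] / errs[1], 2)  # observed order, expected close to 5
```

Halving $t$ reduces the error by roughly $2^{5}$, consistent with the $\left|x-x_{0}\right|^{5}$ bound in (1.3).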
It is shown in [5] that the function $g$ satisfies
\begin{equation*}
h^{(i)}\left(x_{0}\right)=g^{(i)}\left(x_{0}\right), \quad i=\overline{0,4}, \quad \text{and} \quad |h(x)-g(x)| \leq \frac{71}{4320} M_{5}\left|x-x_{0}\right|^{5}, \quad x \in I, \tag{1.3}
\end{equation*}
where $M_{5}=\sup_{x \in I}\left|h^{(5)}(x)\right|$. Also, we observe that the coefficients $\alpha_{1}, \alpha_{2}$ satisfy the equalities
In the next sections, for simplicity, we consider only the autonomous case, i.e. $f=f(y)$. The general case can be treated using similar arguments.
The paper is structured as follows. In Section 2 we derive the numerical method and in Section 3 we analyze the versions corresponding to particular values of the parameter $p_{0}$. In Section 4 numerical examples are provided to illustrate the results obtained in the previous sections.
2 The numerical method
The interval $I=\left[x_{0}, x_{0}+T\right]$ is partitioned by the point set $\left\{x_{n} \mid n=\overline{0, N},\ N \in \mathbb{N}\right\}$, where the $x_{n}$ are given by a prescribed rule, and we denote by $y_{n}$ a numerical approximation to the exact solution $y$ of (1.1) at $x_{n}$.
We suppose that the exact solution $y$ of the initial value problem (1.1) is at least $5$-times differentiable. Conditions for the regularity of the exact solution of an initial value problem can be found in [2].
Using the results presented above we deduce that there exists an approximation $\tilde{y}$ of $y$ given by
for all $x \in I$. From (1.3) we deduce that this approximation satisfies
\tilde{y}^{(i)}\left(x_{0}\right)=y^{(i)}\left(x_{0}\right), \quad i=\overline{0,4}, \quad \text{and} \quad |\tilde{y}(x)-y(x)| \leq \frac{71}{4320} M_{5}\left|x-x_{0}\right|^{5}, \quad x \in I,
where $M_{5}=\sup_{x \in I}\left|y^{(5)}(x)\right|$.
The unknown quantities $y\left(x_{0}+\alpha_{i}\left(x-x_{0}\right)\right)$, $i=1,2$, in (2.1) can be approximated in the same manner, and we continue this approximation procedure for the next unknown values $y\left(x_{0}+\alpha_{1}^{p} \alpha_{2}^{q}\left(x-x_{0}\right)\right)$, $p, q=1,2, \ldots$.
The algorithm described above can be written for the first $p$ steps in the following compact form:
\begin{gather*}
u_{00}(x)=y\left(x_{0}\right)+\frac{1}{2}\left(x-x_{0}\right)\left[f\left(u_{10}(x)\right)+f\left(u_{01}(x)\right)\right], \tag{2.2}\\
u_{i-j\, j}(x)=y\left(x_{0}\right)+\frac{1}{2} \alpha_{1}^{i-j} \alpha_{2}^{j}\left(x-x_{0}\right)\left[f\left(u_{i-j+1\, j}(x)\right)+f\left(u_{i-j\, j+1}(x)\right)\right], \\
j=0, \ldots, i, \quad i=1, \ldots, p,
\end{gather*}
and we can define $\tilde{y}(x)=u_{00}(x)$, $x \in I$.
Taking into account that
\frac{1}{2} \alpha_{1}^{p-j} \alpha_{2}^{j}\left(x-x_{0}\right) \rightarrow 0, \quad j=\overline{0, p}, \quad \text{as } p \rightarrow \infty,
we can choose $p=p_{0}$ such that $y\left(x_{0}\right)$ approximates $u_{p_{0} 0}, u_{p_{0}-1\, 0}, \ldots, u_{0 p_{0}}$ with any given accuracy, since $y$ is a continuous function. From (2.2) we obtain
We use this recurrence relation to construct a numerical method for the scalar initial value problem (1.1). Replacing $x$ by $x_{1}$ in (2.3) we obtain an approximation $\tilde{u}_{00}\left(x_{1}\right)$ of the exact value $y\left(x_{1}\right)$. We denote this approximation by $y_{1}$ and apply the algorithm (2.3) again for $x_{2}$, but, instead of $y\left(x_{0}\right)=y_{0}$, we use the value $y_{1}$ previously computed. We repeat this procedure for each $x_{n}$.
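A direct transcription of the truncated recursion (2.2) into code, for the autonomous case, might look as follows. The function name and the dictionary-based bookkeeping are our own; the unknown nodes at depth $p_{0}$ are replaced by the current value $y_{n}$, as described above.

```python
import math

ALPHA1 = (3 - math.sqrt(3)) / 6
ALPHA2 = (3 + math.sqrt(3)) / 6

def step(f, yn, h, p0):
    """One step of size h of the method: recursion (2.2) truncated at depth p0.

    Node u[(a, b)] approximates y(x_n + alpha1^a * alpha2^b * h); the unknown
    nodes at the deepest level are replaced by yn itself.
    """
    u = {(p0 - j, j): yn for j in range(p0 + 1)}      # deepest level ~ yn
    for i in range(p0 - 1, -1, -1):                   # work back up to u_00
        for j in range(i + 1):
            a, b = i - j, j
            coeff = 0.5 * ALPHA1**a * ALPHA2**b * h   # i = 0 gives coeff = h/2
            u[(a, b)] = yn + coeff * (f(u[(a + 1, b)]) + f(u[(a, b + 1)]))
    return u[(0, 0)]
```

For the linear test problem $y^{\prime}=y$ this reproduces truncated exponential series: with $p_{0}=2$ one step of size $h$ returns $1+h+h^{2}/2$, and with $p_{0}=4$ it returns $1+h+h^{2}/2+h^{3}/6+h^{4}/24$, in agreement with the stability functions derived in Section 3.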
We have the following equivalence result.
Theorem 2.1 The method (2.4) can be identified as a $\frac{p_{0}\left(p_{0}+1\right)}{2}$-stage explicit Runge-Kutta method with the Butcher array given by
\begin{equation*}
\|f\|<M \quad \text{and} \quad \left\|f^{(j)}\right\|<\frac{L^{j}}{M^{j-1}}, \tag{3.2}
\end{equation*}
where $\|f\|=\sup \{|f(t)|: t \in I\}$ and $M, L$ are positive real numbers.
The convergence order of the method (3.1) is provided in the following result.
Theorem 3.1 The method (3.1) has convergence order $2$ and the coefficient of the principal local truncation error $C_{3}$ satisfies the bound $\left\|C_{3}\right\| \leq \frac{1}{6} M L^{2}$.
Proof. Following [3], in order to obtain the local truncation error of the method (3.1) we consider the operator
where $z$ is an arbitrary, sufficiently differentiable function defined on $I$ with $z^{\prime}(x)=f(z(x))$ for all $x \in I$.
Expanding in Taylor series with respect to xx and using (1.4) we obtain
Then, using the definition of the convergence order given in [3], we deduce that the method has second-order accuracy. Also, substituting $z$ by the exact solution $y$ and $x$ by $x_{n}$, and supposing the localizing assumption $y_{i}=y\left(x_{i}\right)$, $i=\overline{1, n}$, the local truncation error can be written as
which concludes the proof.
Following [2], for Runge-Kutta methods, in addition to the row-sum condition, $Ae=Ce$, and the consistency condition, $b^{T} e=1$, the sufficient conditions for order $2$ are given by
\begin{equation*}
b^{T} C e=\frac{1}{2}, \quad b^{T} A e=\frac{1}{2}, \tag{3.7}
\end{equation*}
where $e=[1, \ldots, 1]^{T}$ and $C=\operatorname{diag}(c)$. A simple calculation shows that the method (3.1) verifies conditions (3.7), which represents a validation of (3.4).
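Method (3.1) and its Butcher array are not reproduced in this excerpt; unrolling the $p_{0}=2$ case of (2.2) suggests the 3-stage tableau used in the sketch below, which is our own reconstruction and an assumption. The code checks the row-sum, consistency and order-2 conditions (3.7) numerically, and also that the first order-3 residual $b^{T}ACe$ differs from $\frac{1}{6}$, consistent with Theorem 3.1.

```python
import math

a1 = (3 - math.sqrt(3)) / 6
a2 = (3 + math.sqrt(3)) / 6

# Hypothetical 3-stage tableau for the p0 = 2 method, reconstructed from (2.2).
A = [[0.0, 0.0, 0.0],
     [a1,  0.0, 0.0],
     [a2,  0.0, 0.0]]
b = [0.0, 0.5, 0.5]
c = [0.0, a1, a2]

row_sum = all(abs(sum(A[i]) - c[i]) < 1e-14 for i in range(3))   # Ae = c
consistency = abs(sum(b) - 1.0) < 1e-14                          # b^T e = 1
bTCe = sum(b[i] * c[i] for i in range(3))                        # expect 1/2
bTAe = sum(b[i] * sum(A[i]) for i in range(3))                   # expect 1/2
bTACe = sum(b[i] * sum(A[i][j] * c[j] for j in range(3))
            for i in range(3))                                   # 0, not 1/6
```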
The stability function and absolute stability region of the method (3.1) are provided in the following result.
Theorem 3.2 The method (3.1) has the stability function given by
\begin{equation*}
R(z)=1+z+\frac{z^{2}}{2}, \quad z \in \mathbb{C}. \tag{3.8}
\end{equation*}
Proof. Following [3], we apply the method (3.1) to the scalar test equation
y^{\prime}=\lambda y, \quad \lambda \in \mathbb{C}, \quad \operatorname{Re} \lambda<0.
Denoting $z=\lambda h_{n}$ and using (1.4) we obtain the stability function
R(z)=1+z+\frac{z^{2}}{2}.
Note that this function is the same as the stability function of $2$-stage explicit Runge-Kutta methods of order $2$.
The absolute stability region is given by
The convergence order of the method (3.10) is provided in the following result.
Theorem 3.3 The method (3.10) has convergence order $3$ and the coefficient of the principal local truncation error $C_{4}$ satisfies the bound $\left\|C_{4}\right\| \leq \frac{1}{24} M L^{3}$.
Proof. Using (1.4) and similar arguments to those presented in the proof of Theorem 3.1 we obtain
Using (3.2) we obtain $\left\|C_{4}\right\|<\frac{1}{24} M L^{3}$, which concludes the proof.
A simple calculation shows that the method (3.10) verifies the sufficient conditions for order $3$, see [2] for details,
\begin{equation*}
b^{T} C e=\frac{1}{2}, \quad b^{T} C^{2} e=\frac{1}{3}, \quad b^{T} A C e=\frac{1}{6}, \tag{3.13}
\end{equation*}
which represents a validation of (3.11).
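Method (3.10) is likewise not reproduced in this excerpt; unrolling (2.2) with $p_{0}=3$ gives a 6-stage scheme, in line with Theorem 2.1, whose tableau we reconstruct below as an assumption (the stage abscissae are the products $\alpha_{1}^{a}\alpha_{2}^{b}$ appearing in the recursion). The sketch verifies conditions (3.13) numerically.

```python
import math

a1 = (3 - math.sqrt(3)) / 6
a2 = (3 + math.sqrt(3)) / 6

# Hypothetical 6-stage tableau for the p0 = 3 method, reconstructed from (2.2).
A = [[0,       0,      0,      0,      0, 0],
     [a1 * a1, 0,      0,      0,      0, 0],
     [a1 * a2, 0,      0,      0,      0, 0],
     [a2 * a2, 0,      0,      0,      0, 0],
     [0,       a1 / 2, a1 / 2, 0,      0, 0],
     [0,       0,      a2 / 2, a2 / 2, 0, 0]]
b = [0, 0, 0, 0, 0.5, 0.5]
c = [sum(row) for row in A]  # row-sum condition Ae = c taken as definition

bTCe = sum(bi * ci for bi, ci in zip(b, c))                  # expect 1/2
bTC2e = sum(bi * ci * ci for bi, ci in zip(b, c))            # expect 1/3
Ac = [sum(A[i][j] * c[j] for j in range(6)) for i in range(6)]
bTACe = sum(bi * aci for bi, aci in zip(b, Ac))              # expect 1/6
```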
Theorem 3.4 The method (3.10) has the stability function given by
\begin{equation*}
R(z)=1+z+\frac{z^{2}}{2}+\frac{z^{3}}{6}, \quad z \in \mathbb{C}. \tag{3.14}
\end{equation*}
Proof. Using similar arguments as in the proof of Theorem 3.2.
We observe that method (3.10) has the same stability function as $3$-stage explicit Runge-Kutta methods of order $3$. The absolute stability region, defined by
is plotted using scanning technique in Figure 1.
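The scanning technique mentioned above can be sketched as follows: evaluate $|R(z)|$ on a rectangular grid in the complex plane and keep the points with $|R(z)| \leq 1$. The grid bounds and resolution are our own choices; Figure 1 itself is not reproduced here.

```python
def R(z):
    """Stability function (3.14) of the order-3 method."""
    return 1 + z + z**2 / 2 + z**3 / 6

# Scan a rectangle of the complex plane; keep grid points inside the region.
nx, ny = 400, 400
region = [complex(-4 + 5 * i / nx, -3 + 6 * j / ny)
          for i in range(nx + 1) for j in range(ny + 1)
          if abs(R(complex(-4 + 5 * i / nx, -3 + 6 * j / ny))) <= 1.0]
```

On the real axis the scanned region covers approximately the interval $(-2.51, 0)$, the classical interval of absolute stability for third-order explicit Runge-Kutta methods.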
We restrict ourselves to the cases when $p_{0}$ takes the values $2$, $3$ and $4$. For $p_{0} \geq 5$ the methods obtained from (2.4) have a much more complicated form, with a high computational cost, and the gains in accuracy are not outstanding. From the next theorem we deduce that the convergence order is not greater than $4$.
Theorem 3.5 For $p_{0} \geq 5$ the method (2.4) has maximal convergence order $4$.
Proof. Following [3], we consider the operator
Then, from the definition of the convergence order given in [3], we deduce that for $p_{0} \geq 5$ the method (2.4) has convergence order $4$, which concludes the proof.
Note that the method (2.4) is zero-stable because it verifies the root condition. Also, since the convergence order is at least $2$ for all $p_{0} \geq 2$, we conclude that it satisfies the consistency condition. It follows that our method is convergent, see [3] for details.
4 Numerical examples
We consider the following initial value problems.
Example 1
\begin{equation*}
\left\{\begin{array}{l}
y^{\prime}(x)=\cos ^{2}(y(x)), \quad x \in[0,20], \\
y(0)=0.
\end{array}\right. \tag{4.1}
\end{equation*}
The exact solution is $y:[0,20] \rightarrow \mathbb{R}$ given by $y(x)=\arctan (x)$.
Example 2
\begin{equation*}
\left\{\begin{array}{l}
y^{\prime}(x)=\frac{y(x)}{4}\left(1-\frac{y(x)}{20}\right), \quad x \in[0,20], \\
y(0)=1.
\end{array}\right. \tag{4.2}
\end{equation*}
The exact solution is $y:[0,20] \rightarrow \mathbb{R}$ given by
We apply the methods (3.1), (3.10) and (3.16) with constant steplength to determine numerical solutions for the above examples.
As a measure of performance we consider the errors obtained as the maximum of the absolute errors at the mesh points $x_{n}=nh$, $n=\overline{0, N}$,
for different values of the steplength $h$.
In the next tables we present the results given by the proposed methods (3.1), (3.10) and (3.16), in comparison with the results given by the Adams-Bashforth methods of orders $2$, $3$, $4$ defined by
Analyzing the results, we see that for the above examples the proposed methods are at least comparable to the classical ones.
We observe that if the steplength decreases by a factor of $10$, then the error magnitude for the methods (3.1), (3.10) and (3.16) decreases by factors of $10^{2}$, $10^{3}$ and $10^{4}$, respectively. These results validate the fact that the convergence orders of the methods (3.1), (3.10) and (3.16) are $2$, $3$ and $4$, respectively.
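This order-verification experiment can be reproduced for Example 1 with a short script. The one-step integrator below is our own transcription of the $p_{0}=2$ case of (2.2) (the published tables are not reproduced here); the script estimates the observed convergence order from two runs with steplengths $h$ and $h/2$.

```python
import math

ALPHA1 = (3 - math.sqrt(3)) / 6
ALPHA2 = (3 + math.sqrt(3)) / 6

def step_p2(f, yn, h):
    """One step of the p0 = 2 method obtained from (2.2)."""
    k = f(yn)
    return yn + 0.5 * h * (f(yn + ALPHA1 * h * k) + f(yn + ALPHA2 * h * k))

def max_error(h):
    """Maximum nodal error for Example (4.1): y' = cos^2(y), y(0) = 0 on [0, 20]."""
    f = lambda y: math.cos(y) ** 2
    y, err = 0.0, 0.0
    for i in range(1, round(20 / h) + 1):
        y = step_p2(f, y, h)
        err = max(err, abs(y - math.atan(i * h)))  # exact solution: arctan(x)
    return err

e1, e2 = max_error(0.1), max_error(0.05)
observed_order = math.log(e1 / e2, 2)  # expect a value near 2
```

Halving the steplength roughly quarters the maximum error, i.e. the observed order is close to $2$, in agreement with Theorem 3.1.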
Because the method is explicit, we recommend it especially for non-stiff problems, where the requirements on the steplength imposed by stability are no more restrictive than those imposed by accuracy; see [1] and [3] for more details. Also, we expect the method (2.4) to give better results in the case of a variable steplength.
References
[1] Butcher, J. C., Numerical Methods for Ordinary Differential Equations, John Wiley & Sons, 2008.
[2] Crouzeix, M. and Mignot, A. L., Analyse numérique des équations différentielles, Masson, Paris, 1989.
[3] Lambert, J. D., Numerical Methods for Ordinary Differential Systems: The Initial Value Problem, John Wiley & Sons, 1990.
[4] Pătrulescu, F., A Numerical Method for the Solution of an Autonomous Initial Value Problem, Carpathian J. Math., 28, no. 2, pp. 289-296, 2012.
[5] Păvăloiu, I., On an Approximation Formula, Rev. Anal. Numér. Théor. Approx., 26, no. 1-2, pp. 179-183, 1997.
[6] Ralston, A., Runge-Kutta Methods with Minimum Error Bounds, Math. Comp., 16, no. 80, pp. 431-437, 1962.
[7] Traub, J. F., Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, N.J., 1964.