On the numerical Picard iterations with collocations for the Initial Value Problem
Received: January 21, 2018. Accepted: June 17, 2019. Published online: October 10, 2019.
\(^\ast \)e-mail: scheiber@unitbv.ro.
Some variants of the numerical Picard iterations method are presented to solve an IVP for an ordinary differential system. The term numerical emphasizes that a numerical solution is computed. The method consists in replacing the right hand side of the differential system by Lagrange interpolation polynomials, followed by successive approximations. When the number of interpolation points is fixed, a convergence result is given. For stiff problems, starting from a stabilization principle, a variant of the Picard iteration method is derived.
Finally, some numerical experiments are reported.
MSC. 65L05, 65L60, 65L04.
Keywords. Picard iterations, initial value problem, collocation.
1 Introduction
This paper presents variants of the Picard iterations for solving an initial value problem (IVP) for ordinary differential equations. On subintervals the right hand side of the differential system is replaced by Lagrange interpolation polynomials and then successive approximations are used. The interpolation nodes are the images of a set of reference points. The number of these reference points can be fixed or variable, i.e. increasing [8].
When the number of reference nodes is fixed, the approximations of the solution of the IVP are computed by collocation. A convergence result is given. This case appears in [7, p. 211]. In [3] the spectral deferred correction is defined by adding a term to the iteration formula, and the convergence of that method is proved.
If the number of reference points increases, then the values of the unknown function are determined iteratively [8].
The idea to replace the right hand side of a differential equation with an interpolation polynomial occurs in the multi-step methods, but instead of using the points of the main mesh a reference set of points is used. The usage of an iterative process to obtain some convergence is present in predictor-corrector methods.
We use the terminology numerical Picard iterations to emphasize that the method builds a numerical solution. For an IVP the usual Picard iterations are exemplified with Computer Algebra code in [12].
For stiff problems, starting from a stabilization principle [4], [2], we derive a variant of the Picard iteration method.
There is another approach to the numerical Picard iterations for an IVP, where the approximations are linear combinations of Chebyshev polynomials [6], [1], [3].
A practical rule for verifying a numerical computation is to use two different tools or methods. The presented methods offer such an alternative for solving an IVP.
The paper is organized as follows. After introducing the numerical Picard iterations method in Section 2, two cases are presented in the next sections. In Section 3 the Picard iteration method is studied when the reference set contains a fixed number of points, while in Section 4 it is considered with a reference set whose number of points increases. In Section 5 the case of a stiff problem is treated. In the last section some results of our computational experiments are presented.
2 Numerical Picard iterations
Let the IVP be given by
\[ \begin{array}{lr} y'(x)=f(x,y(x)),\qquad x\in [x_0,x_f], & (2.1)\\[1mm] y(x_0)=y^0, & (2.2) \end{array} \]
where the function \(f:[x_0,x_f]\times \mathbb {R}^N\rightarrow \mathbb {R}^N\) has the components \(f=(f^1,\ldots ,f^N)\).
In \(\mathbb {R}^N\) for \(y=(y^1,\ldots ,y^N)\) we shall use the norm \(\| y\| =\max _{1\le j\le N}|y^j|.\)
We assume that \(f\) is Lipschitz continuous, i.e. there exists \(L{\gt}0\) such that
\[ |f^j(x,y)-f^j(x,z)|\le L\sum _{\mu =1}^N|y^{\mu }-z^{\mu }|,\qquad j\in \{ 1,\ldots ,N\} , \]
and consequently
\[ \| f(x,y)-f(x,z)\| \le \tilde{L}\| y-z\| , \]
where \(\tilde{L}=NL.\)
The IVP (2.1)-(2.2) may be reformulated as the integral equation
\[ y(x)=y^0+\int _{x_0}^xf(s,y(s))\mathrm{d}s,\qquad x\in [x_0,x_f]. \qquad (2.3) \]
Within these hypotheses the problem (2.1)-(2.2) or (2.3) has a unique solution. This solution may be obtained with the Picard iterations
\[ y^{(n+1)}(x)=y^0+\int _{x_0}^xf(s,y^{(n)}(s))\mathrm{d}s,\qquad y^{(0)}(x)=y^0, \]
for \(x\in [x_0,x_f].\) The sequence \((y^{(n)}(x))_{n\in \mathbb {N}}\) converges uniformly on \([x_0,x_f]\) to the solution of the IVP.
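For illustration, the classical Picard iterations can be carried out with a computer algebra system, as mentioned in [12]; a minimal Python/sympy sketch for the scalar problem \(y'=y,\ y(0)=1\) (our choice of example, not taken from the paper):

```python
import sympy as sp

x, s = sp.symbols('x s')
y0 = sp.Integer(1)

def rhs(s, y):
    # right hand side f(s, y) of the test equation y' = y
    return y

y = y0                       # y^(0)(x) = y^0
for _ in range(4):
    # y^(n+1)(x) = y^0 + integral_{x0}^{x} f(s, y^(n)(s)) ds, with x0 = 0
    y = y0 + sp.integrate(rhs(s, y.subs(x, s)), (s, 0, x))

print(sp.expand(y))          # a partial sum of the series of exp(x)
```

Each iteration adds one correct term of the Taylor series of \(e^x\), illustrating the uniform convergence stated above.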
Let \(M\) be a positive integer, \(h=\tfrac {x_f-x_0}{M}\) and the mesh be defined as \(x_i=x_0+ih,\ i\in \{ 0,1,\ldots ,M\} .\) The numerical solution is given by the sequence \(u_h=(u_0,u_1,\ldots ,u_M),\) where each \(u_i=u(x_i)\) is an approximation of \(y(x_i).\)
If \(u_i\) has been computed, on the interval \([x_i,x_{i+1}]\) the function \(f(s,y(s))\) under the integral in
\[ y(x)=y(x_i)+\int _{x_i}^xf(s,y(s))\mathrm{d}s,\qquad x\in [x_i,x_{i+1}], \]
will be replaced by a Lagrange interpolation polynomial.
The interpolation nodes \(x_i\le x_{i,1}{\lt}x_{i,2}{\lt}\ldots {\lt}x_{i,m}\le x_{i+1}\) are fixed by a certain rule. The notation used expresses the interpolation constraints
From (2.5) we deduce
where \((l_j)_{1\le j\le m}\) are the Lagrange fundamental polynomials
\[ l_j(s)=\prod _{\stackrel{\mu =1}{\mu \not=j}}^m\tfrac {s-x_{i,\mu }}{x_{i,j}-x_{i,\mu }},\qquad j\in \{ 1,\ldots ,m\} . \]
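As a quick numerical aid, a Lagrange fundamental polynomial can be evaluated directly from its product form; a sketch (the function name is ours):

```python
import numpy as np

def lagrange_basis(nodes, j, x):
    """Evaluate the j-th (0-based) Lagrange fundamental polynomial
    associated with `nodes` at the point x."""
    nodes = np.asarray(nodes, dtype=float)
    others = np.delete(nodes, j)          # all nodes except the j-th
    return float(np.prod((x - others) / (nodes[j] - others)))
```

By construction \(l_j(x_{i,k})=\delta _{j,k}\), which is exactly the interpolation property used above.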
3 Picard iterations with a fixed reference set
In (2.6) the values \(u(x_{i,1}),\ldots ,u(x_{i,m})\) are unknown. To compute these vectors the collocation method will be used.
Choosing \(x:=x_{i,k}\) in (2.6) we get
The relations (3.8) form a nonlinear system with the unknowns \(u(x_{i,1}),\ldots ,u(x_{i,m}) \in \underbrace{\mathbb {R}^N\times \ldots \times \mathbb {R}^N}_m \simeq \mathbb {R}^{mN}.\)
In order to simplify and to provide a unified approach to the computation of the integrals from (3.8), we fix the nodes \(\xi _1{\lt}\xi _2{\lt}\ldots {\lt}\xi _m\) within an interval \([a,b].\) We call these nodes the reference interpolation nodes. The function
maps the interval \([a,b]\) into \([x_i,x_{i+1}].\) For any \(i\in \{ 0,1,\ldots ,M-1\} \) the nodes \(x_{i,j}\) will be defined as
If \(s=\varphi _i(\xi )\) then \(l_j(s)=\tilde{l}_j(\xi )\) and \(\mathrm{d}s=\tfrac {h}{b-a}\mathrm{d}\xi .\)
Denoting
\[ w_{j,k}=\tfrac {1}{b-a}\int _a^{\xi _k}\tilde{l}_j(\xi )\mathrm{d}\xi ,\qquad j,k\in \{ 1,\ldots ,m\} , \]
the nonlinear system (3.8) becomes
\[ u(x_{i,k})=u(x_i)+h\sum _{j=1}^mw_{j,k}f(x_{i,j},u(x_{i,j})),\qquad k\in \{ 1,\ldots ,m\} . \]
In order to prove the existence of a solution of the nonlinear system we shall use a simplified notation \(u(x_{i,k})=u_k,\ k\in \{ 1,2,\ldots ,m\} .\) Then the system (3.9) is written as
The operator
\[ \Phi =(\Phi _1,\ldots ,\Phi _m):\underbrace{\mathbb {R}^N\times \ldots \times \mathbb {R}^N}_m\rightarrow \underbrace{\mathbb {R}^N\times \ldots \times \mathbb {R}^N}_m \]
is defined by
\[ \Phi _k(u_1,\ldots ,u_m)=u_i+h\sum _{j=1}^mw_{j,k}f(x_{i,j},u_j),\qquad k\in \{ 1,\ldots ,m\} . \]
The norm used in \(\underbrace{\mathbb {R}^N\times \ldots \times \mathbb {R}^N}_m\) will be \(\| u\| =\max _{1\le k\le m}\| u_k\| .\)
If \(u=(u_1,\ldots ,u_m)\) and \(v=(v_1,\ldots ,v_m)\) then the following equality is valid
\[ \Phi _k(u)-\Phi _k(v)=h\sum _{j=1}^mw_{j,k}\left(f(x_{i,j},u_j)-f(x_{i,j},v_j)\right),\qquad k\in \{ 1,\ldots ,m\} . \]
Then
\[ \| \Phi _k(u)-\Phi _k(v)\| \le h\tilde{L}\sum _{j=1}^m|w_{j,k}|\, \| u_j-v_j\| \le h\tilde{L}\Big(\sum _{j=1}^m|w_{j,k}|\Big)\| u-v\| . \]
If \(\omega =\max _{1\le k\le m}\sum _{j=1}^m|w_{j,k}|,\) the above inequality gives
\[ \| \Phi (u)-\Phi (v)\| \le h\omega \tilde{L}\, \| u-v\| . \]
Thus, if \(h\omega \tilde{L}{\lt}1\) then \(\Phi \) is a contraction. The following theorem is a consequence of the above:
For \(h\) small enough \((h{\lt}\frac{1}{\omega \tilde{L}})\) the nonlinear system (3.9) has a unique solution.
In the hypothesis of the above theorem, the nonlinear system (3.9) may be solved using the successive approximation method
\[ u^{(n+1)}_k=u_i+h\sum _{j=1}^mw_{j,k}f(x_{i,j},u^{(n)}_j),\qquad u^{(0)}_k=u_i, \]
for \(k\in \{ 1,2,\ldots ,m\} .\) Because \(\Phi \) is a contraction, (3.11), the sequences \((u^{(n)}_k)_{n\in \mathbb {N}},\ k\in \{ 1,\ldots ,m\} ,\) converge to the solution of the system (3.9).
The iterative relations (3.12) can be written in matrix form
\[ \left[u^{(n+1)}_{i,1},\ldots ,u^{(n+1)}_{i,m}\right]= \left[u_i,\ldots ,u_i\right]+ h\left[f_{i,1},\ldots ,f_{i,m}\right]\left(w_{j,k}\right)_{1\le j,k\le m}, \]
where \(f_{i,j}=f(x_{i,j},u^{(n)}_{i,j}), \ j\in \{ 1,\ldots ,m\} .\) Denoting \(u^{(n)}_i=(u^{(n)}_{i,j})_{1\le j\le m}\) the iterations stop when the following condition is fulfilled \(\| u^{(n)}_i-u^{(n-1)}_i\| {\lt} \varepsilon ,\) where \(\varepsilon {\gt}0\) is a tolerance. The initial approximations are chosen as \(u^{(0)}_{i,j}=u(x_i)\) for any \(j\in \{ 1,2,\ldots ,m\} .\)
This method of solving the nonlinear system (3.9) yields an approximation of the solution of the IVP at the rightmost node, which may differ from \(x_{i+1}\). We point out two variants of the computation:
- We change the initial mesh such that \(x_{i+1}\) is the rightmost node (\(x_{i+1}=\varphi _i(\xi _m)\)) and the computation continues on the interval \([x_{i+1},x_{i+1}+h].\) In this case we have
\[ u_{i+1}\stackrel{\mathrm{def}}{=}u(x_{i+1})=u^{(n)}_{i,m}. \]
- In (2.5) we set \(x:=x_{i+1}=\varphi _i(b)\) and
\[ u_{i+1}\stackrel{\mathrm{def}}{=}u(x_{i+1})=u(x_i)+\tfrac {h}{b-a}\sum _{j=1}^m\bigg(\int _a^b\tilde{l}_j(\xi )\mathrm{d}\xi \bigg)f(x_{i,j},u^{(n)}_{i,j}). \]
In this way, \(m\) additional integrals must be computed.
With the new notations we have \(u_0\stackrel{\mathrm{def}}{=}u(x_0)=y^0.\)
The coefficients \(w_{j,k}\) do not depend on the computation interval. We highlight some cases when these coefficients may be easily computed.
Some particular cases
Equidistant nodes. If \(\xi _j=\tfrac {j-1}{m-1},\ j\in \{ 1,2,\ldots ,m\} ,\) then
\[ w_{j,k}=\int _0^{\xi _k}\tilde{l}_j(\xi )\mathrm{d}\xi =\tfrac {(-1)^{m-j}}{(j-1)!(m-j)!} \int _0^{\frac{k-1}{m-1}}\prod _{\stackrel{\mu =0}{\mu \not=j-1}}^{m-1}\left((m-1)\xi -\mu \right)\mathrm{d}\xi . \]The following Mathematica code computes these coefficients:
\(\pmb {\text{Wcoeff}[\text{j$\_ $},\text{k$\_ $},\text{m$\_ $}]\text{:=}}\\ \pmb {\text{Module}[\{ x,w\} ,}\\ \pmb {w=\text{Integrate}[\text{Product}[\text{If}[i\neq j-1,(m-1)x-i,1],\{ i,0,m-1\} ],}\\ \pmb {\{ x,0,(k-1)/(m-1)\} ];(-1){}^{\wedge }(m-j) w/((j-1)! (m-j)!)]}\)
The results obtained for \(m=2\) are
\(\pmb {\text{MatrixForm}[\text{Table}[\text{Wcoeff}[j,k,2],\{ k,1,2\} ,\{ j,1,2\} ]]}\)
\(\left( \begin{smallmatrix} 0 & 0 \\ \frac{1}{2} & \frac{1}{2} \\ \end{smallmatrix} \right)\)
Because \(x_{i,1}=x_i\) and \(x_{i,2}=x_{i+1},\) the recurrence formulas (3.12)-(3.13) become
\begin{eqnarray} u^{(n+1)}_{i+1}& =& u_i+\tfrac {h}{2}\left(f(x_i,u_i)+f(x_{i+1},u^{(n)}_{i+1})\right); \label{ettrapez}\\ u^{(0)}_{i+1} & =& u_i. \nonumber \end{eqnarray}For \(m=3\) the results are
\(\pmb {\text{MatrixForm}[\text{Table}[\text{Wcoeff}[j,k,3],\{ k,1,3\} ,\{ j,1,3\} ]]}\)
\(\left( \begin{smallmatrix} 0 & 0 & 0 \\ \frac{5}{24} & \frac{1}{3} & -\frac{1}{24} \\ \frac{1}{6} & \frac{2}{3} & \frac{1}{6} \\ \end{smallmatrix} \right)\)
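The same coefficients can be reproduced with exact rational arithmetic in Python/sympy (our translation of the Mathematica code above; the function name is ours):

```python
import sympy as sp

def wcoeff(j, k, m):
    """w_{j,k} = integral_0^{xi_k} of the j-th fundamental polynomial,
    for the equidistant reference nodes xi_i = (i-1)/(m-1) on [0, 1];
    j and k are 1-based, as in the text."""
    xi = sp.Symbol('xi')
    nodes = [sp.Rational(i - 1, m - 1) for i in range(1, m + 1)]
    lj = sp.prod((xi - nodes[mu]) / (nodes[j - 1] - nodes[mu])
                 for mu in range(m) if mu != j - 1)
    return sp.integrate(lj, (xi, 0, nodes[k - 1]))

# rows indexed by k, columns by j, as in the matrices above
W3 = sp.Matrix(3, 3, lambda k, j: wcoeff(j + 1, k + 1, 3))
print(W3)
```

The computed matrix for \(m=3\) coincides with the one displayed above.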
In this case \(x_{i,1}=x_i, x_{i,2}=\tfrac {1}{2}(x_i+x_{i+1})\stackrel{\mathrm{def}}{=}x_{i+\frac{1}{2}}, x_{i,3}=x_{i+1}\) and the recurrence formulas (3.12)-(3.13) become
\begin{align} u^{(n+1)}_{i+\frac{1}{2}} =& u_i+ h\Big(\tfrac {5}{24}f(x_i,u_i)+\tfrac {1}{3}f(x_{i+\frac{1}{2}},u^{(n)}_{i+\frac{1}{2}}) -\tfrac {1}{24}f(x_{i+1},u^{(n)}_{i+1})\Big);\nonumber \\ u^{(n+1)}_{i+1} =& u_i+ \tfrac {h}{6}\Big(f(x_i,u_i)+4f(x_{i+\frac{1}{2}},u^{(n)}_{i+\frac{1}{2}}) +f(x_{i+1},u^{(n)}_{i+1})\Big);\label{etsimpson}\\ u^{(0)}_{i+\frac{1}{2}} =& u_i; \nonumber \\ u^{(0)}_{i+1} =& u_i. \nonumber \end{align}In matrix form the above relations are
\[ \left[\begin{array}{l} u_i \\ u^{(n+1)}_{i+\frac{1}{2}} \\ u^{(n+1)}_{i+1} \end{array}\right]= \left[\begin{array}{l} u_i \\ u_i \\ u_i \end{array}\right]+h\left(\begin{array}{ccc} 0 & 0 & 0\\ \frac{5}{24} & \frac{1}{3} & -\frac{1}{24} \\ \frac{1}{6} & \frac{2}{3} & \frac{1}{6} \end{array}\right) \left[\begin{array}{l} f(x_i,u_i) \\ f(x_{i+\frac{1}{2}},u^{(n)}_{i+\frac{1}{2}})\\ f(x_{i+1},u^{(n)}_{i+1}) \end{array}\right]. \]Transposing the above equality we get the form corresponding to (3.14).
To compute \(u_{i+1},\) observe that for \(m=2\) the trapezoidal rule (3.15) is used, while for \(m=3\) the Simpson integration formula (3.16) is used.
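The recurrences (3.12)-(3.13) translate directly into a stepping procedure; a sketch for a scalar ODE (the driver, tolerances, and test equation \(y'=y\) are our choices), using the weight matrix from the \(m=3\) case above:

```python
import math
import numpy as np

# weight matrix (w_{j,k}) for m = 3 equidistant reference nodes;
# its last row carries Simpson's rule
W = np.array([[0.0,  0.0, 0.0],
              [5/24, 1/3, -1/24],
              [1/6,  2/3, 1/6]])

def picard_step_m3(f, x_i, u_i, h, eps=1e-12, max_iter=100):
    """Successive approximations (3.12)-(3.13) on [x_i, x_i + h]."""
    xs = x_i + h * np.array([0.0, 0.5, 1.0])   # x_i, x_{i+1/2}, x_{i+1}
    u = np.full(3, u_i, dtype=float)           # u^(0)_{i,j} = u_i
    for _ in range(max_iter):
        u_new = u_i + h * (W @ f(xs, u))
        done = np.max(np.abs(u_new - u)) < eps
        u = u_new
        if done:
            break
    return u[2]                                # approximation at x_{i+1}

# y' = y, y(0) = 1 on [0, 1], with M = 10 subintervals
M, h, u = 10, 0.1, 1.0
for i in range(M):
    u = picard_step_m3(lambda x, y: y, i * h, u, h)
print(abs(u - math.e))                         # small: the scheme is of high order
```

The iterations on each subinterval converge because \(h\omega \tilde{L}{\lt}1\) for this step size.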
Chebyshev points of the second kind \(\xi _j=\cos {\frac{(j-1)\pi }{m-1}},\ j\in \{ 1,\ldots ,m\} .\) Then
\[ w_{j,k}=\tfrac {1}{2}\int _{-1}^{\xi _k}\tilde{l}_j(\xi )\mathrm{d}\xi = \tfrac {(-1)^{j-1}2^{m-3}\gamma _j}{m-1}\int _{-1}^{\xi _k}\prod _{\stackrel{\mu =1}{\mu \not=j}}^m(\xi -\xi _{\mu })\mathrm{d}\xi , \]with \(\gamma _j=\left\{ \begin{array}{lcl} 0.5 & \mbox{if} & j\in \{ 1,m\} \\ 1 & \mbox{if} & j\in \{ 2,\ldots ,m-1\} \end{array}\right..\)
The nodes are the roots of an orthogonal polynomial. Now we suppose that the polynomial \(p_m(\xi )=\prod _{j=1}^m(\xi -\xi _j)\) is orthogonal to \(\mathbb {P}_{m-1},\) the set of polynomials of degree at most \(m-1,\) with the weight \(\rho (\xi )\) on the interval \(I=[a,b].\) In this case the Lagrange fundamental polynomials \(\tilde{l}_j(\xi ),\ j\in \{ 1,\ldots ,m\} \) are orthogonal.
If \(\rho (\xi )=1,\ I=[a,b],\) then \(p_m(\xi )=\tfrac {m!}{(2m)!}\tfrac {\mathrm{d}^m}{\mathrm{d}\xi ^m}(\xi -a)^m(\xi -b)^m\) is the Legendre polynomial. For \(a=0,\ b=1\) and \(m=1\) the following results are obtained
\begin{eqnarray*} p_1(\xi ) & =& \xi -\tfrac {1}{2} \quad \Rightarrow \quad \xi _1=\tfrac {1}{2}\\ w_{1,1} & =& \int _0^{\frac{1}{2}}\mathrm{d}\xi =\tfrac {1}{2}\\ u^{(n+1)}_{i+\frac{1}{2}} & =& u_i+\tfrac {h}{2}f(x_{i+\frac{1}{2}},u^{(n)}_{i+\frac{1}{2}}) \end{eqnarray*}Again we observe that \(u(x_{i+\frac{1}{2}})\) is computed using the midpoint rule in the right hand side of (2.5).
The Chebyshev polynomials \(p_m(\xi )=\tfrac {1}{2^{m-1}}\cos (m\arccos {\xi }),\ m\in \mathbb {N},\) are orthogonal with the weight \(\rho (\xi )=\tfrac {1}{\sqrt{1-\xi ^2}}\) in \(I=[-1,1].\)
The nodes will be
\[ \xi _j=\cos {\tfrac {(2j-1)\pi }{2m}} \quad \Rightarrow \quad x_{i,j}=x_i+\tfrac {h}{2}(\xi _j+1),\ j\in \{ 1,2,\ldots ,m\} . \]The largest node is \(x_{i,1}.\) The Lagrange fundamental polynomials are
\[ \tilde{l}_j(\xi )=\tfrac {2^{m-1}}{m}(-1)^{j-1}\sin {\tfrac {(2j-1)\pi }{2m}} \prod _{\stackrel{\mu =1}{\mu \not=j}}^m\left(\xi -\cos {\tfrac {(2\mu -1)\pi }{2m}}\right) \]and
\[ w_{j,k}=\tfrac {2^{m-2}}{m}(-1)^{j-1}\sin {\tfrac {(2j-1)\pi }{2m}} \int _{-1}^{\cos {\frac{(2k-1)\pi }{2m}}}\prod _{\stackrel{\mu =1}{\mu \not=j}}^m\left(\xi -\cos {\tfrac {(2\mu -1)\pi }{2m}}\right) \mathrm{d}\xi . \]The integral can be computed analytically, but its evaluation is affected by rounding errors.
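These coefficients can also be obtained numerically; a sketch using numpy's polynomial helpers (the function name is ours), with the identity \(\sum _{j=1}^mw_{j,k}=\tfrac {\xi _k+1}{2}\) (a consequence of \(\sum _j\tilde{l}_j\equiv 1\)) as a consistency check:

```python
import numpy as np

def cheb_root_weights(m):
    """w_{j,k} = (1/2) * integral_{-1}^{xi_k} of the j-th fundamental
    polynomial for the Chebyshev roots xi_j = cos((2j-1)pi/(2m));
    returns the nodes and W with W[k, j] = w_{j+1, k+1} (0-based)."""
    xi = np.cos((2 * np.arange(1, m + 1) - 1) * np.pi / (2 * m))
    W = np.zeros((m, m))
    for j in range(m):
        others = np.delete(xi, j)
        coeffs = np.poly(others) / np.prod(xi[j] - others)  # l~_j, monomial basis
        anti = np.polyint(coeffs)                           # antiderivative
        W[:, j] = 0.5 * (np.polyval(anti, xi) - np.polyval(anti, -1.0))
    return xi, W
```

Working with monomial coefficients is adequate for moderate \(m\); for large \(m\) a more stable basis would be preferable, which mirrors the rounding-error remark above.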
3.1 The convergence of the method
When \(h,\) the distance between two mesh points \(x_i\) and \(x_{i+1}=x_i+h,\) is small enough, the convergence of the Picard iterations is ensured by the contraction condition. We recall that \(m,\) the number of points of the reference set, is fixed. The convergence problem concerns the behavior of the numerical solution \((u_0,u_1,\ldots ,u_M)\) with respect to the analytical solution \((y(x_0),y(x_1),\ldots ,y(x_M)).\)
We suppose that:
The function \(f(x,y(x))\) is continuous, hence there exists a constant \(K_1{\gt}0\) such that
\[ \max _{1\le \mu \le N}\max _{x\in [x_0,x_f]}|f^{\mu }(x,y(x))|\le K_1. \]The function \(f(x,y)\) has continuous partial derivatives of order \(m\) for any \(x\in [x_0,x_f],\ y\in \mathbb {R}^N.\) There exists \(K_m{\gt}0\) such that
\[ \max _{1\le \mu \le N}\max _{x\in [x_0,x_f]}\left|\tfrac {\mathrm{d}^mf^{\mu }(x,y(x))}{\mathrm{d}x^m}\right|\le K_m. \]
In any interval \([x_i,x_{i+1}]\) the following equality is valid [10, Th. 2.1.4.1]:
\[ f^{\mu }(x,y(x))=\sum _{j=1}^ml_j(x)f^{\mu }(x_{i,j},y(x_{i,j}))+ \tfrac {\prod _{j=1}^m(x-x_{i,j})}{m!}\, \tfrac {\mathrm{d}^mf^{\mu }(x,y(x))}{\mathrm{d}x^m}\bigg |_{x=\eta _{\mu }}, \]
where \(\eta _{\mu }\in [x_i,x_{i+1}].\)
We denote by \(R^{\mu }(x)\) the remainder term in the above equality; then \(\max _{x\in [x_i,x_{i+1}]}|R^{\mu }(x)|\le \frac{K_m}{m!}h^m.\) If \(R(x)=(R^1(x),\ldots ,R^N(x)) \) then (2.4) implies the vectorial relation
and \(\| \int _{x_i}^xR(s)\mathrm{d}s\| \le \tfrac {K_m}{m!}h^{m+1}.\)
We introduce the following notations
and additionally
We emphasize that \(n\) represents the number of iterations on an interval \([x_i,x_{i+1}].\) This number differs from one interval to another. For simplicity we omitted the index \(i\) when \(n\) is written.
The following theorem will be used several times:
If \((z_k)_{k\in \mathbb {N}}\) is a sequence of nonnegative numbers such that
\[ z_k\le a\, z_{k-1}+b,\qquad k\ge 1, \]
with \(a,b{\gt}0,\ a\not=1,\) then
\[ z_k\le a^kz_0+b\, \tfrac {a^k-1}{a-1}. \]
The above inequality implies: if \(a{\gt}1\) then \(z_k\le a^k\left(z_0+\tfrac {b}{a-1}\right)\) and if \(a{\lt}1\) then \(z_k\le a^kz_0+\tfrac {b}{1-a}.\)
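The bound for \(a{\gt}1\) can be sanity-checked numerically in the worst case, i.e. with equality in the recurrence; a small sketch with sample values \(a=1.5,\ b=0.2\) of our choosing:

```python
a, b, z0 = 1.5, 0.2, 1.0   # sample values with a > 1

z = z0
for k in range(1, 31):
    z = a * z + b                                   # equality: the worst case
    # bound stated above for a > 1, with a small float-roundoff allowance
    assert z <= a**k * (z0 + b / (a - 1)) + 1e-9
```

In the equality case the recurrence solves exactly to \(z_k=a^kz_0+b\tfrac {a^k-1}{a-1}\), which is what the loop reproduces.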
The following convergence result holds:
If the above assumptions hold, then
\[ \lim _{h\searrow 0}\max _{0\le i\le M}\| y(x_i)-u_i\| =0, \]
that is, the method is convergent.
For \(n=0\) the equalities hold:
and then we deduce
If \(n{\gt}0,\) for \(x=x_{i,k}\) the equality (3.17) may be written as
Subtracting (3.12) from (3.18) we obtain
It follows that
If \(h\) is small enough (\(h\tilde{L}\tilde{w}{\lt}1\)) then
Evaluating \(e_i\) we distinguish two cases depending on the definition of \(u_{i+1}\):
or
Corresponding to the two cases, from (3.17) we obtain the equalities
and respectively
Computing \(y(x_{i+1})-u_{i+1},\) it follows that
respectively
It follows that
and
We remark that between the two estimates only the upper index of \(r_i\) differs. This justifies the \(m\) additional integrals computed in the second case.
From here on it is sufficient to consider only the first case. Using (3.19) we obtain
Because \(h\tilde{L}\tilde{w}{\lt} 1 \ \Rightarrow \ (h\tilde{L}\tilde{w})^n\le h\tilde{L}\tilde{w},\) the above inequality becomes
Consequently
Because \(e_0=0,\) from the above inequality it follows that
for \(h\searrow 0\ \Leftrightarrow \ M\rightarrow \infty .\)
4 Picard iterations with a variable reference set
We shall keep some of the notations introduced above and define those that differ.
Let \(a\le \xi _1^m{\lt}\xi _2^m{\lt}\ldots {\lt}\xi _m^m\le b\) be the roots of the polynomial \(p_m(x),\) where \((p_m)_{m\in \mathbb {N}}\) is a sequence of orthogonal polynomials with the weight \(\rho \in L_2[a,b]\) on the interval \([a,b].\) It is assumed that \(\frac{1}{\rho }\in L_2[a,b],\) too. These are requirements of the convergence theorem [8].
If \(\varphi _i\) is the affine function transforming \([a,b]\) onto \([x_i,x_{i+1}],\) then the nodes are defined as
For \(x\in [x_i,x_{i+1}],\) we define
where \(\zeta =\varphi _i^{-1}(x)\) and
The vectors \(u_{i,j}^m\) are defined iteratively
It was taken into account that \(\tilde{l}_1(\xi )=1.\) As a rule
for \(k\in \{ 1,2,\ldots ,m+1\} \) and \(m\in \mathbb {N}^*.\)
We must compute
too.
The computation of the vectors \(u_{i,k}^{m+1},\ k\in \{ 1,2,\ldots ,m+1\} ,\ u_{i+1}^{m+1}\) can be written in matrix form. For simplicity we denote
and the matrix
The following equality holds
For an imposed tolerance \(\varepsilon {\gt}0,\) the iterations continue until the condition \(\| u_{i+1}^{m+1}-u_{i+1}^{m}\| {\lt}\varepsilon \) is fulfilled. The initial approximation is \(u_{i+1}^1=u_i.\) If the above condition is fulfilled then we set \(u_{i+1}=u_{i+1}^{m+1}.\)
A convergence result is given in [ 8 ] .
5 Stiff problems
From (2.3), if \(s=x_0+h\sigma \) then
with \(x\in [x_0,x_0+h]\ \Leftrightarrow \ \sigma \in [0,1].\)
Setting
we derive that \(v(0)=0\) and
Following [ 4 ] , [ 2 ] , by the stabilization principle, the solution of the partial differential system
has the property, cf. [ 4 ] , [ 2 ] ,
We give a numerical solution to find an approximation of the solution of (5.19).
Let \(\tau {\gt}0\) and the sequence \(t^n=n\tau ,\ n\in \mathbb {N}.\) The equation (5.19) may be rewritten as
and integrating from \(n\tau \) to \((n\! +\! 1)\tau \) we obtain
Without changing the notation for \(w,\) we substitute in (5.21) \(f(x_0+h\sigma ,y_0+h w(\sigma ,\eta ))\) by a Lagrange interpolation polynomial
where \(0=\xi _1{\lt}\xi _2{\lt}\ldots {\lt}\xi _m=1.\)
We denote \(w^n(\zeta )=w(\zeta ,t^n)\) and in the right hand side of (5.22) we take \(w(\xi _j,\eta )=w^n(\xi _j),\) for any \(j\in \{ 1,2,\ldots ,m\} \) and \(\eta \in [n\tau ,(n+1)\tau ].\) Then
Denoting \(w^n_j=w^n(\xi _j),\) for \(\zeta =\xi _k,\ k\in \{ 1,2,\ldots ,m\} \) we obtain the iterative relations
The iterations continue until the stopping condition \(\max _{1\le j\le m}\| w^{n+1}_j-w^n_j\| {\lt}\varepsilon \) is fulfilled, where \(\varepsilon {\gt}0\) is a tolerance. According to (5.20) we take \(v(1)=w^{n+1}_m\) and the procedure continues with \(u_{i+1}=u_i+h w^{n+1}_m.\)
6 Numerical experiments
Using computer programs based on these methods, we solved the following IVPs:
([9, p. 234])
\[ \left\{ \begin{array}{lcl} \dot{y} & = & y\tfrac {4(x+2)^3-y}{(x+2)^4-1},\qquad x\in [0,1],\\ y(0) & =& 15 \end{array}\right. \]with the solution \(y(x)=1+(x+2)+(x+2)^2+(x+2)^3.\)
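The stated closed-form solution can be verified symbolically; a quick sympy sketch (ours):

```python
import sympy as sp

x = sp.Symbol('x')
y = 1 + (x + 2) + (x + 2)**2 + (x + 2)**3
rhs = y * (4 * (x + 2)**3 - y) / ((x + 2)**4 - 1)

assert y.subs(x, 0) == 15                         # initial condition
assert sp.simplify(sp.diff(y, x) - rhs) == 0      # differential equation
```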
For \(M=5\) and the tolerance \(\varepsilon =10^{-5},\) the maximum error \(\max _{0\le i\le M}\| y(x_i)-u_i\| \) and \(N_f,\) the number of evaluations of the function \(f,\) are given in the table below.
\[ \begin{array}{cc|cc} \multicolumn {2}{c|}{\text {Fixed equidistant reference set, } m=3} & \multicolumn {2}{c}{\text {Variable reference set}}\\ \text {Error} & N_f & \text {Error} & N_f\\ \hline \text {1.82591e-08} & 75 & \text {8.94274e-08} & 99 \end{array} \]
([9, p. 244])
\[ \left\{ \begin{array}{lclcl} \dot{y_1} & = & y_2,& \quad & y_1(0)=1, \qquad x\in [0,x_f],\\ \dot{y_2} & = & -\frac{y_1}{r^3},& \quad & y_2(0)=0, \\ \dot{y_3} & = & y_4,& \quad & y_3(0)=0, \\ \dot{y_4} & = & -\frac{y_3}{r^3},& \quad & y_4(0)=1, \end{array}\right. \]where \(r=\sqrt{y_1^2+y_3^2}\) and with the solution \(y_1=\cos {x}, y_2=-\sin {x}, y_3=\sin {x}, y_4=\cos {x}.\)
The results of our numerical experiments are listed in the table below.
\[ \begin{array}{ccc|cc|cc} & & & \multicolumn {2}{c|}{\text {Fixed equidistant reference set, } m=3} & \multicolumn {2}{c}{\text {Variable reference set}}\\ x_f & M & \varepsilon & \text {Error} & N_f & \text {Error} & N_f\\ \hline 2\pi & 10 & 10^{-5} & \text {0.0247309} & 300 & \text {6.47998e-05} & 550\\ 2\pi & 10 & 10^{-9} & \text {0.0246415} & 480 & \text {2.24345e-09} & 1050\\ 4\pi & 10 & 10^{-5} & \text {0.888217} & 534 & \text {0.000142862} & 966\\ 4\pi & 20 & 10^{-9} & \text {0.0496889} & 960 & \text {1.05491e-08} & 2100\\ 6\pi & 10 & 10^{-5} & \text {14.4197} & 762 & \text {6.23799e-05} & 1530\\ 6\pi & 40 & 10^{-9} & \text {0.0232977} & 1560 & \text {3.06542e-09} & 3640 \end{array} \]
Now we compare the results obtained using equidistant nodes and Chebyshev points of the second kind for the reference set. For the same example the results are given in the table below.
\[ \begin{array}{ccc|cc|cc} & & & \multicolumn {2}{c|}{\text {Fixed equidistant reference set, } m=5} & \multicolumn {2}{c}{\text {Chebyshev fixed reference set}}\\ x_f & M & \varepsilon & \text {Error} & N_f & \text {Error} & N_f\\ \hline 2\pi & 10 & 10^{-5} & \text {6.93002e-05} & 400 & \text {2.69646e-05} & 400\\ 2\pi & 10 & 10^{-9} & \text {1.91509e-05} & 650 & \text {8.13527e-06} & 650\\ 4\pi & 10 & 10^{-5} & \text {0.00215349} & 600 & \text {0.000338729} & 551\\ 4\pi & 20 & 10^{-9} & \text {3.85763e-05} & 1300 & \text {1.6391e-05} & 1300\\ 6\pi & 10 & 10^{-5} & \text {0.0275954} & 900 & \text {0.0164587} & 820\\ 6\pi & 40 & 10^{-9} & \text {1.00764e-05} & 2200 & \text {4.18516e-06} & 2200 \end{array} \]
As expected, the results using Chebyshev points of the second kind are better than those obtained using equidistant nodes, due to the better approximation properties of the Lagrange interpolation polynomial with Chebyshev points of the second kind compared with equidistant points [11].
([9, p. 245]) Keeping the differential system as in the previous example but changing the initial conditions to \(y(0)=[0.4,0,0,2],\) for \(x_f=2\pi ,\ M=20\) and \(\varepsilon =10^{-9},\) the method based on a variable reference set gave \(\max _{0\le i\le M}\| y(x_i)-u_i\| =2.94126\cdot 10^{-9}\) and \(N_f=1400.\)
In this case the solution is
\[ y(x)=[\cos {u}-0.6, \tfrac {-\sin {u}}{1-0.6 \cos {u}}, 0.8\sin {u}, \tfrac {0.8\cos {u}}{1-0.6\cos {u}}], \]where \(x=u-0.6\sin {u}.\)
Based on the previous examples, the method with a variable number of reference points appears more efficient than the method with a fixed number of reference points, but such a conclusion cannot be deduced theoretically.
Using the method for stiff problems presented above we solved:
- \[ \left\{ \begin{array}{lclcl} \dot{y_1} & = & 998y_1+1998 y_2,& \quad & y_1(0)=1, \qquad x\in [0,1],\\ \dot{y_2} & = & -999y_1-1999 y_2,& \quad & y_2(0)=0, \\ \end{array}\right. \]
with the solution \(y_1=2e^{-x}-e^{-1000x}, y_2=-e^{-x}+e^{-1000x}.\)
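Again, the stated solution can be verified symbolically (a sympy sketch, ours):

```python
import sympy as sp

x = sp.Symbol('x')
y1 = 2 * sp.exp(-x) - sp.exp(-1000 * x)
y2 = -sp.exp(-x) + sp.exp(-1000 * x)

assert (y1.subs(x, 0), y2.subs(x, 0)) == (1, 0)   # initial conditions
assert sp.simplify(sp.diff(y1, x) - (998 * y1 + 1998 * y2)) == 0
assert sp.simplify(sp.diff(y2, x) - (-999 * y1 - 1999 * y2)) == 0
```

The widely separated decay rates \(e^{-x}\) and \(e^{-1000x}\) are what make this problem stiff.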
For \(\tau =10\) the results are given in the table below. We recall that \(\tau \) is the step length for the \(t\) variable of the stabilization principle.
\[ \begin{array}{cc|cc|cc} & & \multicolumn {2}{c|}{\text {Fixed equidistant reference set, } m=5} & \multicolumn {2}{c}{\text {Chebyshev fixed reference set}}\\ M & \varepsilon & \text {Error} & N_f & \text {Error} & N_f\\ \hline 300 & 10^{-5} & \text {0.00164977} & 8585 & \text {0.000402419} & 8435\\ 500 & 10^{-7} & \text {0.000128781} & 10700 & \text {4.35037e-05} & 10555 \end{array} \]
- \[ \begin{array}{lcl} \dot{y} & = & -20y, \qquad y(0)=1,\quad x\in [0,1]. \end{array} \]
For \(\tau =10,\ M=20\) and \(\varepsilon =10^{-7}\) the results are given in the table below.
\[ \begin{array}{cc|cc} \multicolumn {2}{c|}{\text {Fixed equidistant reference set, } m=5} & \multicolumn {2}{c}{\text {Chebyshev fixed reference set}}\\ \text {Error} & N_f & \text {Error} & N_f\\ \hline \text {1.19382e-06} & 800 & \text {4.58431e-07} & 785 \end{array} \]
To make the results reproducible we provide some code at https://github.com/e-scheiber/scilab-ivpsolvers.git.
The author wishes to thank the referee for suggesting many improvements of this paper.
Bibliography
- 1
X. Bai, Modified Chebyshev-Picard Iteration Methods for Solution of Initial Value and Boundary Value Problems, PhD Dissertation, Texas A&M University, 2010.
- 2
- 3
M.F. Causley, D.C. Seal, On the Convergence of Spectral Deferred Correction Methods, arXiv:1706.06245v1 [math.NA], 2017.
- 4
- 5
- 6
- 7
E. Hairer, S.P. Nørsett, G. Wanner, Solving Ordinary Differential Equations I. Nonstiff Problems, Second Ed., Springer, Berlin, 1993.
- 8
E. Scheiber, A multistep method to solve the initial value problem. The 4th Romanian-German Seminar on Approximation Theory and its Applications. Braşov, 2000 (ed. H. Gonska, D. Kacso, L. Beutel) Gerhard Mercator Universitat, Duisburg, 124–132.
- 9
L.F. Shampine, M.K. Gordon, Computer Solution of Ordinary Differential Equations: The Initial Value Problem, W.H. Freeman and Company, San Francisco, 1975.
- 10
J. Stoer, R. Bulirsch, Introduction to Numerical Analysis, Springer-Verlag, New York, 1993.
- 11
L.N. Trefethen, Approximation Theory and Approximation Practice, SIAM, Philadelphia, 2012.
- 12
***, http://mathfaculty.fullerton.edu/mathews/n2003/PicardIterationMod.html, 2017.