On the numerical Picard iterations with collocations for the Initial Value Problem

Ernest Scheiber\(^\ast \)

January 21, 2018. Accepted: June 17, 2019. Published online: October 10, 2019.

\(^\ast \)e-mail: scheiber@unitbv.ro.

Some variants of the numerical Picard iterations method are presented to solve an IVP for an ordinary differential system. The term numerical emphasizes that a numerical solution is computed. The method consists of replacing the right hand side of the differential system by Lagrange interpolation polynomials, followed by successive approximations. In the case when the number of interpolation points is fixed, a convergence result is given. For stiff problems, starting from a stabilization principle, a variant of the Picard iteration method is derived.

Finally, some numerical experiments are reported.

MSC. 65L05, 65L60, 65L04.

Keywords. Picard iterations, initial value problem, collocation.

1 Introduction

This paper presents variants of the Picard iterations to solve an initial value problem (IVP) for ordinary differential equations. On subintervals the right hand side of the differential system is replaced by Lagrange interpolation polynomials and then successive approximations are used. The interpolation nodes are the images of a set of reference points. The number of these reference points can be fixed or variable, i.e. increasing in number [ 8 ] .

When the number of reference nodes is fixed the approximations of the solution of the IVP are computed by collocations. A convergence result is given. This case appears in [ 7 , p. 211 ] . In [ 3 ] the spectral deferred correction is defined adding a term to the iteration formula and the convergence of that method is proved.

If the number of reference points increases then the values of the unknown function are determined iteratively [ 8 ] .

The idea of replacing the right hand side of a differential equation with an interpolation polynomial also occurs in the multi-step methods, but here, instead of the points of the main mesh, a reference set of points is used. The use of an iterative process to achieve convergence is also present in predictor-corrector methods.

We use the terminology numerical Picard iterations to emphasize that the method builds a numerical solution. For an IVP the usual Picard iterations are exemplified with Computer Algebra code in [ 12 ] .

For stiff problems, starting from a stabilization principle, [ 4 ] , [ 2 ] , we derived a variant of the Picard iteration method.

There is another approach to the numerical Picard iterations for an IVP, where the approximations are linear combinations of Chebyshev polynomials [ 6 ] , [ 1 ] , [ 3 ] .

A practical rule for verifying a numerical computation is to use two different tools or methods. The presented methods offer such an alternative for solving an IVP.

The paper is organized as follows. After introducing the numerical Picard iterations method in Section 2, two cases are presented in the next sections. In Section 3 the Picard iteration method is studied when the reference set contains a fixed number of points, while in Section 4 the Picard iteration method is considered with a reference set whose number of points increases. In Section 5 the case of a stiff problem is treated. In the last section some results of our computational experiments are presented.

2 Numerical Picard iterations

Let the IVP be given by

\begin{eqnarray} \dot{y}(x) & =& f(x,y(x)), \qquad x\in [x_0,x_f],\label{sac1}\\ y(x_0) & =& y^0, \label{sac2} \end{eqnarray}

where the function \(f:[x_0,x_f]\times \mathbb {R}^N\rightarrow \mathbb {R}^N\) has the components \(f=(f^1,\ldots ,f^N)\).

In \(\mathbb {R}^N\) for \(y=(y^1,\ldots ,y^N)\) we shall use the norm \(\| y\| =\max _{1\le j\le N}|y^j|.\)

We assume that \(f\) is Lipschitz continuous, i.e. there exists \(L{\gt}0\) such that

\[ |f^{\mu }(x,y_1)-f^{\mu }(x,y_2)|\le L\sum _{j=1}^N|y_1^j-y_2^j|\qquad \forall \ y_1,y_2\in \mathbb {R}^N,\ \mu \in \{ 1,2,\ldots ,N\} \]

and consequently

\[ \| f(x,y_1)-f(x,y_2)\| \le \tilde{L}\| y_1-y_2\| , \]

where \(\tilde{L}=NL.\)

The IVP (2.1)-(2.2) may be reformulated as the integral equation

\begin{equation} \label{sac3} y(x)=y^0+\int _{x_0}^xf(s,y(s))\mathrm{d}s. \tag{2.3} \end{equation}

Within these hypotheses the problem (2.1)-(2.2) or (2.3) has a unique solution. This solution may be obtained with the Picard iterations

\begin{eqnarray*} y^{(n+1)}(x)& =& y^0+\int _{x_0}^xf(s,y^{(n)}(s))\mathrm{d}s,\quad n\in \mathbb {N},\\ y^{(0)}(x) & =& y^0, \end{eqnarray*}

for \(x\in [x_0,x_f].\) The sequence \((y^{(n)}(x))_{n\in \mathbb {N}}\) converges uniformly on \([x_0,x_f]\) to the solution of the IVP.
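When the right hand side is polynomial, the classical Picard iterations can be carried out exactly. A minimal sketch (our own illustration, not part of the method proposed below) for the scalar problem \(\dot{y}=y,\ y(0)=1\), where each iterate is stored by its monomial coefficients; after \(n\) iterations the iterate is the degree-\(n\) Taylor polynomial of \(e^x\):

```python
# Classical Picard iterations for y' = y, y(0) = 1 (toy illustration).
# An iterate is a polynomial, stored by its coefficients c[k] for x**k;
# integration from 0 to x maps c[k] -> c[k]/(k+1).
from fractions import Fraction

def picard_step(coeffs):
    # y_{n+1}(x) = 1 + integral_0^x y_n(s) ds
    new = [Fraction(1)]
    for k, c in enumerate(coeffs):
        new.append(Fraction(c, k + 1))
    return new

y = [Fraction(1)]          # y^{(0)}(x) = y^0 = 1
for _ in range(5):
    y = picard_step(y)
# y now holds the coefficients 1, 1, 1/2, 1/6, 1/24, 1/120
```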

Let \(M\) be a positive integer, \(h=\tfrac {x_f-x_0}{M}\) and the mesh be defined as \(x_i=x_0+ih,\ i\in \{ 0,1,\ldots ,M\} .\) The numerical solution is given by the sequence \(u_h=(u_0,u_1,\ldots ,u_M),\) where each \(u_i=u(x_i)\) is an approximation of \(y(x_i).\)

Once \(u_i\) has been computed, on the interval \([x_i,x_{i+1}]\) the function \(f(s,y(s))\) under the integral in

\begin{equation} \label{sac14} y(x)=y(x_i)+\int _{x_i}^xf(s,y(s))\mathrm{d}s \tag{2.4} \end{equation}

will be replaced by a Lagrange interpolation polynomial

\begin{equation} \label{sac4} u(x)=u(x_i)+\int _{x_i}^xL(\mathbb {P}_{m-1};x_{i,1},x_{i,2},\ldots ,x_{i,m};f(\cdot ,u(\cdot )))(s)\mathrm{d}s,\quad x\in [x_i,x_{i+1}]. \tag{2.5} \end{equation}

The interpolation nodes \(x_i\le x_{i,1}{\lt}x_{i,2}{\lt}\ldots {\lt}x_{i,m}\le x_{i+1}\) are fixed by a certain rule. The notation denotes the polynomial of degree at most \(m-1\) satisfying the interpolation constraints

\[ L(\mathbb {P}_{m-1};x_{i,1},x_{i,2},\ldots ,x_{i,m};f(\cdot ,u(\cdot )))(x_{i,j})=f(x_{i,j},u(x_{i,j})),\quad j\in \{ 1,2,\ldots ,m\} . \]

From (2.5) we deduce

\begin{equation} \label{sac5} u(x)=u(x_i)+\sum _{j=1}^m\left(\int _{x_i}^xl_j(s)\mathrm{d}s\right)f(x_{i,j},u(x_{i,j})), \tag{2.6} \end{equation}

where \((l_j)_{1\le j\le m}\) are the Lagrange fundamental polynomials

\begin{equation} \label{sac6} l_j(x)=\tfrac {(x-x_{i,1})\ldots (x-x_{i,j-1})(x-x_{i,j+1})\ldots (x-x_{i,m})} {(x_{i,j}-x_{i,1})\ldots (x_{i,j}-x_{i,j-1})(x_{i,j}-x_{i,j+1})\ldots (x_{i,j}-x_{i,m})}. \tag{2.7} \end{equation}

3 Picard iterations with a fixed reference set

In (2.6) the values

\[ u(x_{i,1}), u(x_{i,2}), \ldots , u(x_{i,m}) \]

are unknown. To compute these vectors the collocation method will be used.

Choosing \(x:=x_{i,k}\) in (2.6) we get

\begin{equation} \label{sac7} u(x_{i,k})=u(x_i)+\sum _{j=1}^m\left(\int _{x_i}^{x_{i,k}}l_j(s)\mathrm{d}s\right)f(x_{i,j},u(x_{i,j})), \quad k\in \{ 1,2,\ldots ,m\} . \tag{3.8} \end{equation}

The relations (3.8) form a nonlinear system with the unknowns \(u(x_{i,1}),\ldots ,u(x_{i,m}) \in \underbrace{\mathbb {R}^N\times \ldots \times \mathbb {R}^N}_m \simeq \mathbb {R}^{mN}.\)

In order to simplify, and to provide a unified approach to, the computation of the integrals in (3.8), we fix the nodes \(\xi _1{\lt}\xi _2{\lt}\ldots {\lt}\xi _m\) within an interval \([a,b].\) We call these nodes the reference interpolation nodes. The function

\[ \varphi _i(\xi )=x_i+\tfrac {h}{b-a}(\xi -a) \]

maps the interval \([a,b]\) into \([x_i,x_{i+1}].\) For any \(i\in \{ 0,1,\ldots ,M-1\} \) the nodes \(x_{i,j}\) will be defined as

\[ x_{i,j}=\varphi _i(\xi _j),\qquad \forall \ j\in \{ 1,2,\ldots ,m\} . \]

If \(s=\varphi _i(\xi )\) then

\[ l_j(s)=\tfrac {(\xi -\xi _1)\ldots (\xi -\xi _{j-1})(\xi -\xi _{j+1})\ldots (\xi -\xi _m)} {(\xi _j-\xi _1)\ldots (\xi _j-\xi _{j-1})(\xi _j-\xi _{j+1})\ldots (\xi _j-\xi _m)}=\tilde{l}_j(\xi ) \]

and

\[ \int _{x_i}^{x_{i,k}}l_j(s)\mathrm{d}s=\tfrac {h}{b-a}\int _a^{\xi _k}\tilde{l}_j(\xi )\mathrm{d}\xi . \]

Denoting

\[ w_{j,k}=\tfrac {1}{b-a}\int _a^{\xi _k}\tilde{l}_j(\xi )\mathrm{d}\xi \]

the nonlinear system (3.8) becomes

\begin{equation} \label{sac11} u(x_{i,k})=u(x_i)+h\sum _{j=1}^mw_{j,k}f(x_{i,j},u(x_{i,j})), \quad k\in \{ 1,2,\ldots ,m\} . \tag{3.9} \end{equation}

In order to prove the existence of a solution of the nonlinear system we shall use a simplified notation \(u(x_{i,k})=u_k,\ k\in \{ 1,2,\ldots ,m\} .\) Then the system (3.9) is written as

\begin{equation} \label{sac12} u_k=u(x_i)+h\sum _{j=1}^mw_{j,k}f(x_{i,j},u_j), \quad k\in \{ 1,2,\ldots ,m\} . \tag{3.10} \end{equation}

The operator

\[ \Phi =(\Phi _k)_{1\le k\le m},\quad \mbox{where}\quad \Phi _k:\underbrace{\mathbb {R}^N\times \ldots \times \mathbb {R}^N}_m\rightarrow \mathbb {R}^N \]

is defined by

\[ \Phi _k(u)=u(x_i)+h\sum _{j=1}^mw_{j,k}f(x_{i,j},u_j),\qquad u=(u_1,\ldots ,u_m). \]

The used norm in \(\underbrace{\mathbb {R}^N\times \ldots \times \mathbb {R}^N}_m\) will be

\[ |\! |\! |u|\! |\! |=|\! |\! |(u_1,\ldots ,u_m)|\! |\! |=\sum _{j=1}^m\| u_j\| . \]

If \(u=(u_1,\ldots ,u_m)\) and \(v=(v_1,\ldots ,v_m)\) then the following equality holds

\[ \Phi _k(u)-\Phi _k(v)=h\sum _{j=1}^mw_{j,k}\left(f(x_{i,j},u_j)-f(x_{i,j},v_j)\right),\quad k\in \{ 1,2,\ldots ,m\} . \]

Then

\[ \| \Phi _k(u)-\Phi _k(v)\| \le h \tilde{L}\sum _{j=1}^m|w_{j,k}|\| u_j-v_j\| \]

and

\[ \sum _{k=1}^m\| \Phi _k(u)-\Phi _k(v)\| \le h \tilde{L}\sum _{k,j=1}^m|w_{j,k}|\| u_j-v_j\| . \]

If \(\omega =\max _{1\le j\le m}\sum _{k=1}^m|w_{j,k}|\) the above inequality gives

\begin{equation} \label{etcontr} |\! |\! |\Phi (u)-\Phi (v)|\! |\! |\le h\omega \tilde{L}|\! |\! |u-v|\! |\! |. \tag{3.11} \end{equation}

Thus, if \(h\omega \tilde{L}{\lt}1\) then \(\Phi \) is a contraction. The following theorem is a consequence of the above:

Theorem 3.1

For \(h\) small enough \((h{\lt}\frac{1}{\omega \tilde{L}})\) the nonlinear system (3.9) has a unique solution.

In the hypothesis of the above theorem, the nonlinear system (3.9) may be solved using the successive approximation method

\begin{eqnarray} u^{(n+1)}(x_{i,k}) & =& u(x_i)+h\sum _{j=1}^mw_{j,k}f(x_{i,j},u^{(n)}(x_{i,j})), \quad n\in \mathbb {N}\label{sac8}\\ u^{(0)}(x_{i,k}) & =& u(x_i) \label{sac9} \end{eqnarray}

for \(k\in \{ 1,2,\ldots ,m\} .\) Because \(\Phi \) is a contraction, cf. (3.11), the sequences

\[ u^{(n)}_{i,j}\stackrel{\mathrm{def}}{=}u^{(n)}(x_{i,j}),\qquad n\in \mathbb {N},\quad j\in \{ 1,2,\ldots ,m\} \]

will converge to the solution of the system (3.9).

The iterative relations (3.12) can be written in matrix form

\begin{equation} \label{sac18} [u^{(n+1)}_{i,1}\ u^{(n+1)}_{i,2}\ldots u^{(n+1)}_{i,m}]=\underbrace{[u_i\ u_i\ldots u_i]}_{m} +h[f_{i,1}\ f_{i,2}\ldots f_{i,m}]\left(\begin{smallmatrix} w_{1,1} & \ldots & w_{1,m}\\ w_{2,1} & \ldots & w_{2,m}\\ \vdots & & \vdots \\ w_{m,1} & \ldots & w_{m,m} \end{smallmatrix}\right), \tag{3.14} \end{equation}

where \(f_{i,j}=f(x_{i,j},u^{(n)}_{i,j}), \ j\in \{ 1,\ldots ,m\} .\) Denoting \(u^{(n)}_i=(u^{(n)}_{i,j})_{1\le j\le m}\) the iterations stop when the following condition is fulfilled \(\| u^{(n)}_i-u^{(n-1)}_i\| {\lt} \varepsilon ,\) where \(\varepsilon {\gt}0\) is a tolerance. The initial approximations are chosen as \(u^{(0)}_{i,j}=u(x_i)\) for any \(j\in \{ 1,2,\ldots ,m\} .\)

This method of solving the nonlinear system (3.9) yields an approximation of the solution of the IVP at the rightmost node, which may differ from \(x_{i+1}\). We point out two variants of the computation:

  • We change the initial mesh such that \(x_{i+1}\) is the rightmost node (\(x_{i+1}=\varphi _i(\xi _m)\)) and the computation continues in the interval \([x_{i+1},x_{i+1}+h].\) In this case we have

    \[ u_{i+1}\stackrel{\mathrm{def}}{=}u(x_{i+1})=u^{(n)}_{i,m}. \]
  • In (2.5) we set \(x:=x_{i+1}=\varphi _i(b)\) and

    \[ u_{i+1}\stackrel{\mathrm{def}}{=}u(x_{i+1})=u(x_i)+\tfrac {h}{b-a}\sum _{j=1}^m\bigg(\int _a^b\tilde{l}_j(\xi )\mathrm{d}\xi \bigg)f(x_{i,j},u^{(n)}_{i,j}). \]

    In this way \(m\) additional integrals must be computed.

With the new notations we have \(u_0\stackrel{\mathrm{def}}{=}u(x_0)=y^0.\)
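The procedure described so far can be condensed into a short sketch. The code below is our own illustration, not the author's implementation: it uses equidistant reference nodes on \([0,1]\) (so \(\xi _m=1\) and the first variant above applies, \(u_{i+1}=u^{(n)}_{i,m}\)); the function names and the default \(m=3\) are our choices.

```python
import numpy as np

def ref_weights(xi):
    """w[j, k] = integral_0^{xi_k} of the Lagrange basis l~_j (here [a, b] = [0, 1])."""
    m = len(xi)
    w = np.empty((m, m))
    for j in range(m):
        lj = np.polynomial.Polynomial([1.0])
        for mu in range(m):
            if mu != j:
                lj *= np.polynomial.Polynomial([-xi[mu], 1.0]) / (xi[j] - xi[mu])
        L = lj.integ()
        for k in range(m):
            w[j, k] = L(xi[k]) - L(0.0)
    return w

def picard_solve(f, x0, y0, xf, M, m=3, eps=1e-12, nmax=100):
    """Numerical Picard iterations (3.12)-(3.13) with equidistant reference nodes."""
    xi = np.linspace(0.0, 1.0, m)        # xi_m = 1: the rightmost node is x_{i+1}
    w = ref_weights(xi)
    h = (xf - x0) / M
    x, u = x0, np.atleast_1d(np.asarray(y0, dtype=float))
    for _ in range(M):
        nodes = x + h * xi
        U = np.tile(u, (m, 1))           # u^{(0)}(x_{i,k}) = u(x_i)
        for _ in range(nmax):
            F = np.array([f(nodes[j], U[j]) for j in range(m)])
            Unew = u + h * (w.T @ F)     # iteration (3.12)
            done = np.max(np.abs(Unew - U)) < eps
            U = Unew
            if done:
                break
        x, u = x + h, U[-1]              # u_{i+1} = u^{(n)}_{i,m}
    return u

# Test problem y' = y, y(0) = 1 on [0, 1]; u(1) approximates e
u1 = picard_solve(lambda x, y: y, 0.0, [1.0], 1.0, M=10)
```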

The coefficients \(w_{j,k}\) do not depend on the computation interval. We highlight some cases when these coefficients may be easily computed.

Some particular cases

  1. Equidistant nodes. If \(\xi _j=\tfrac {j-1}{m-1},\ j\in \{ 1,2,\ldots ,m\} ,\) then

    \[ w_{j,k}=\int _0^{\xi _k}\tilde{l}_j(\xi )\mathrm{d}\xi =\tfrac {(-1)^{m-j}}{(j-1)!(m-j)!} \int _0^{\frac{k-1}{m-1}}\prod _{\stackrel{\mu =0}{\mu \not=j-1}}^{m-1}\left((m-1)\xi -\mu \right)\mathrm{d}\xi . \]

    The following Mathematica code computes these coefficients:

    Wcoeff[j_, k_, m_] :=
      Module[{x, w},
        w = Integrate[Product[If[i != j - 1, (m - 1) x - i, 1], {i, 0, m - 1}],
          {x, 0, (k - 1)/(m - 1)}];
        (-1)^(m - j) w/((j - 1)! (m - j)!)]

    The results obtained for \(m=2\) are

    MatrixForm[Table[Wcoeff[j, k, 2], {k, 1, 2}, {j, 1, 2}]]

    \(\left( \begin{smallmatrix} 0 & 0 \\ \frac{1}{2} & \frac{1}{2} \\ \end{smallmatrix} \right)\)

    Because \(x_{i,1}=x_i\) and \(x_{i,2}=x_{i+1}\) the recurrence formula (3.12)-(3.13) becomes

    \begin{eqnarray} u^{(n+1)}_{i+1}& =& u_i+\tfrac {h}{2}\left(f(x_i,u_i)+f(x_{i+1},u^{(n)}_{i+1})\right); \label{ettrapez}\\ u^{(0)}_{i+1} & =& u_i. \nonumber \end{eqnarray}

    For \(m=3\) the results are

    MatrixForm[Table[Wcoeff[j, k, 3], {k, 1, 3}, {j, 1, 3}]]

    \(\left( \begin{smallmatrix} 0 & 0 & 0 \\ \frac{5}{24} & \frac{1}{3} & -\frac{1}{24} \\ \frac{1}{6} & \frac{2}{3} & \frac{1}{6} \\ \end{smallmatrix} \right)\)

    In this case \(x_{i,1}=x_i, x_{i,2}=\tfrac {1}{2}(x_i+x_{i+1})\stackrel{\mathrm{def}}{=}x_{i+\frac{1}{2}}, x_{i,3}=x_{i+1}\) and the recurrence formulas (3.12)-(3.13) become

    \begin{align} u^{(n+1)}_{i+\frac{1}{2}} =& u_i+ h\Big(\tfrac {5}{24}f(x_i,u_i)+\tfrac {1}{3}f(x_{i+\frac{1}{2}},u^{(n)}_{i+\frac{1}{2}}) -\tfrac {1}{24}f(x_{i+1},u^{(n)}_{i+1})\Big);\nonumber \\ u^{(n+1)}_{i+1} =& u_i+ \tfrac {h}{6}\Big(f(x_i,u_i)+4f(x_{i+\frac{1}{2}},u^{(n)}_{i+\frac{1}{2}}) +f(x_{i+1},u^{(n)}_{i+1})\Big);\label{etsimpson}\\ u^{(0)}_{i+\frac{1}{2}} =& u_i; \nonumber \\ u^{(0)}_{i+1} =& u_i. \nonumber \end{align}

    In matrix form the above relations are

    \[ \left[\begin{array}{l} u_i \\ u^{(n+1)}_{i+\frac{1}{2}} \\ u^{(n+1)}_{i+1} \end{array}\right]= \left[\begin{array}{l} u_i \\ u_i \\ u_i \end{array}\right]+h\left(\begin{array}{ccc} 0 & 0 & 0\\ \frac{5}{24} & \frac{1}{3} & -\frac{1}{24} \\ \frac{1}{6} & \frac{2}{3} & \frac{1}{6} \end{array}\right) \left[\begin{array}{l} f(x_i,u_i) \\ f(x_{i+\frac{1}{2}},u^{(n)}_{i+\frac{1}{2}})\\ f(x_{i+1},u^{(n)}_{i+1}) \end{array}\right]. \]

    Transposing the above equality we get the form corresponding to (3.14).

    To compute \(u_{i+1}\) we observe that for \(m=2\) the trapezoidal rule (3.15) is used, while for \(m=3\) the Simpson integration formula (3.16) is used.

  2. Chebyshev points of the second kind. If \(\xi _j=\cos {\frac{(j-1)\pi }{m-1}},\ j\in \{ 1,\ldots ,m\} ,\) then

    \[ w_{j,k}=\tfrac {1}{2}\int _{-1}^{\xi _k}\tilde{l}_j(\xi )\mathrm{d}\xi = \tfrac {(-1)^{j-1}2^{m-3}\gamma _j}{m-1}\int _{-1}^{\xi _k}\prod _{\stackrel{\mu =1}{\mu \not=j}}^m(\xi -\xi _{\mu })\mathrm{d}\xi \]

    with \(\gamma _j=\left\{ \begin{array}{lcl} 0.5 & \mbox{if} & j\in \{ 1,m\} \\ 1 & \mbox{if} & j\in \{ 2,\ldots ,m-1\} \end{array}\right..\)

  3. The nodes are the roots of an orthogonal polynomial. Now we suppose that the polynomial \(p_m(\xi )=\prod _{j=1}^m(\xi -\xi _j)\) is orthogonal to \(\mathbb {P}_{m-1},\) the set of polynomials of degree at most \(m-1,\) with the weight \(\rho (\xi )\) on the interval \(I=[a,b].\) In this case the Lagrange fundamental polynomials \(\tilde{l}_j(\xi ),\ j\in \{ 1,\ldots ,m\} \) are orthogonal.

    • If \(\rho (\xi )=1,\ I=[a,b]\) then \(p_m(\xi )=\tfrac {m!}{(2m)!}\tfrac {\mathrm{d}^m}{\mathrm{d}\xi ^m}(\xi -a)^m(\xi -b)^m\) is the Legendre polynomial. For \(a=0,\ b=1\) and \(m=1\) the following results are obtained

      \begin{eqnarray*} p_1(\xi ) & =& \xi -\tfrac {1}{2} \quad \Rightarrow \quad \xi _1=\tfrac {1}{2}\\ w_{1,1} & =& \int _0^{\frac{1}{2}}\mathrm{d}\xi =\tfrac {1}{2}\\ u^{(n+1)}_{i+\frac{1}{2}} & =& u_i+\tfrac {h}{2}f(x_{i+\frac{1}{2}},u^{(n)}_{i+\frac{1}{2}}) \end{eqnarray*}

      Again we observe that \(u(x_{i+\frac{1}{2}})\) is computed using the rectangle rule in the right hand side of (2.5).

    • The Chebyshev polynomials \(p_m(\xi )=\tfrac {1}{2^{m-1}}\cos (m\arccos {\xi }),\ m\in \mathbb {N},\) are orthogonal with the weight \(\rho (\xi )=\tfrac {1}{\sqrt{1-\xi ^2}}\) in \(I=[-1,1].\)

      The nodes will be

      \[ \xi _j=\cos {\tfrac {(2j-1)\pi }{2m}} \quad \Rightarrow \quad x_{i,j}=x_i+\tfrac {h}{2}(\xi _j+1),\ j\in \{ 1,2,\ldots ,m\} . \]

      The largest node is \(x_{i,1}.\) The Lagrange fundamental polynomials are

      \[ \tilde{l}_j(\xi )=\tfrac {2^{m-1}}{m}(-1)^{j-1}\sin {\tfrac {(2j-1)\pi }{2m}} \prod _{\stackrel{\mu =1}{\mu \not=j}}^m\left(\xi -\cos {\tfrac {(2\mu -1)\pi }{2m}}\right) \]

      and

      \[ w_{j,k}=\tfrac {2^{m-2}}{m}(-1)^{j-1}\sin {\tfrac {(2j-1)\pi }{2m}} \int _{-1}^{\cos {\frac{(2k-1)\pi }{2m}}}\prod _{\stackrel{\mu =1}{\mu \not=j}}^m\left(\xi -\cos {\tfrac {(2\mu -1)\pi }{2m}}\right) \mathrm{d}\xi \]

      The integral can be computed analytically, but its evaluation is prone to rounding errors.
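As an alternative to the analytic evaluation, the coefficients \(w_{j,k}\) can be obtained by exact polynomial integration carried out in floating point. The sketch below is our own (all names are ours): it reproduces the \(m=3\) equidistant table above and checks, for Chebyshev roots of the first kind, the identity \(\sum _{j=1}^m w_{j,k}=(\xi _k-a)/(b-a)\) implied by \(\sum _j\tilde{l}_j\equiv 1.\)

```python
import numpy as np

def w_matrix(xi, a, b):
    """w[j, k] = (1/(b-a)) * integral_a^{xi_k} l~_j(s) ds for nodes xi in [a, b]."""
    m = len(xi)
    w = np.empty((m, m))
    for j in range(m):
        lj = np.polynomial.Polynomial([1.0])
        for mu in range(m):
            if mu != j:
                lj *= np.polynomial.Polynomial([-xi[mu], 1.0]) / (xi[j] - xi[mu])
        L = lj.integ()
        for k in range(m):
            w[j, k] = (L(xi[k]) - L(a)) / (b - a)
    return w

# Equidistant nodes on [0, 1] with m = 3: the columns k of W reproduce the
# rows (0, 0, 0), (5/24, 1/3, -1/24), (1/6, 2/3, 1/6) of the table in the text.
W = w_matrix(np.array([0.0, 0.5, 1.0]), 0.0, 1.0)

# Chebyshev roots of the first kind on [-1, 1]: since sum_j l~_j = 1,
# every column must satisfy sum_j w[j, k] = (xi_k + 1)/2.
mC = 5
xs = np.sort(np.cos((2 * np.arange(1, mC + 1) - 1) * np.pi / (2 * mC)))
WC = w_matrix(xs, -1.0, 1.0)
```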

3.1 The convergence of the method

When \(h,\) the distance between two mesh points \(x_i\) and \(x_{i+1}=x_i+h,\) is small enough, the convergence of the Picard iterations is ensured by the contraction condition. We recall that \(m,\) the number of points of the reference set, is fixed. The convergence problem concerns the behavior of the numerical solution \((u_0,u_1,\ldots ,u_M)\) with respect to the analytical solution \((y(x_0),y(x_1),\ldots ,y(x_M)).\)

We suppose that:

  • The function \(f(x,y(x))\) is continuous, and hence there exists a constant \(K_1{\gt}0\) such that

    \[ \max _{1\le \mu \le N}\max _{x\in [x_0,x_f]}|f^{\mu }(x,y(x))|\le K_1. \]
  • The function \(f(x,y)\) has continuous partial derivatives of order \(m\) for any \(x\in [x_0,x_f],\ y\in \mathbb {R}^N.\) There exists \(K_m{\gt}0\) such that

    \[ \max _{1\le \mu \le N}\max _{x\in [x_0,x_f]}\left|\tfrac {\mathrm{d}^mf^{\mu }(x,y(x))}{\mathrm{d}x^m}\right|\le K_m. \]

In any interval \([x_i,x_{i+1}]\) the following equality is valid, [ 10 , Th. 2.1.4.1 ] ,

\begin{align*} & f^{\mu }(x,y(x))-L(\mathbb {P}_{m-1};x_{i,1},x_{i,2},\ldots ,x_{i,m};f^{\mu }(\cdot ,y(\cdot )))(x)= \\ & =\tfrac {1}{m!}\prod _{j=1}^m(x-x_{i,j}) \left.\tfrac {\mathrm{d}^mf^{\mu }(x,y(x))}{\mathrm{d}x^m}\right|_{x=\eta _{\mu }} \end{align*}

where \(\eta _{\mu }\in [x_i,x_{i+1}].\)

We denote by \(R^{\mu }(x)\) the right hand side of the above equality and then \(\max _{x\in [x_i,x_{i+1}]}|R^{\mu }(x)|\le \frac{K_m}{m!}h^m.\) If \(R(x)=(R^1(x),\ldots ,R^N(x)) \) then (2.4) implies the vectorial relation

\begin{equation} \label{sac15} y(x)=y(x_i)+\int _{x_i}^xL(\mathbb {P}_{m-1};x_{i,1},x_{i,2},\ldots ,x_{i,m};f(\cdot ,y(\cdot )))(s)\mathrm{d}s+ \int _{x_i}^xR(s)\mathrm{d}s \tag{3.17} \end{equation}

and \(\| \int _{x_i}^xR(s)\mathrm{d}s\| \le \tfrac {K_m}{m!}h^{m+1}.\)

We introduce the following notations

\[ \begin{array}{lcll} e_i & =& \| y(x_i)-u_i\| , & i\in \{ 0,1,\ldots ,M\} ;\\ r^{(n)}_{i,j} & =& \| y(x_{i,j})-u^{(n)}_{i,j}\| ,& j\in \{ 1,2,\ldots ,m\} ;\\ r^{(n)}_i & =& \max _{1\le j\le m}r^{(n)}_{i,j}. & \\ \end{array} \]

and additionally

\[ w= \max \bigg\{ \max _{1\le j,k\le m}|w_{j,k}|,\max _{1\le j\le m}\tfrac {1}{b-a}\bigg|\int _a^b\tilde{l}_j(\xi )\mathrm{d}\xi \bigg|\bigg\} , \qquad \tilde{w}=m w. \]

We emphasize that \(n\) represents the number of iterations on an interval \([x_i,x_{i+1}].\) This number differs from one interval to another. For simplicity we omit the index \(i\) when writing \(n.\)

The following theorem will be used several times.

Theorem 3.2

If \((z_k)_{k\in \mathbb {N}}\) is a sequence of nonnegative numbers such that

\[ z_{k+1}\le a z_k+b\quad \forall \ k\in \mathbb {N}\quad \mbox{and}\quad a,b{\gt}0,\ a\not=1, \]

then

\[ z_k\le a^k z_0+ b\tfrac {a^k-1}{a-1}, \qquad \forall \ k\in \mathbb {N}. \]

The above inequality implies: if \(a{\gt}1\) then \(z_k\le a^k\left(z_0+\tfrac {b}{a-1}\right)\) and if \(a{\lt}1\) then \(z_k\le a^kz_0+\tfrac {b}{1-a}.\)
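The closed form is obtained by unrolling the recurrence, \(z_k\le a^kz_0+b(1+a+\ldots +a^{k-1})=a^kz_0+b\tfrac {a^k-1}{a-1}.\) A quick exact check of the equality case (the sample values are ours):

```python
from fractions import Fraction

a, b = Fraction(3), Fraction(2)    # sample values with a > 1
z0 = Fraction(5)
z = z0
for _ in range(8):
    z = a * z + b                  # equality case of z_{k+1} <= a z_k + b
# closed form of Theorem 3.2 at k = 8
bound = a**8 * z0 + b * (a**8 - 1) / (a - 1)
```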

The following convergence result holds:

Theorem 3.3

If the above assumptions hold, then

\[ \lim _{h\searrow 0}\max _{i\in \{ 0,1,\ldots ,M\} }\| y(x_i)-u_i\| =0, \]

that is, the convergence of the method.

Proof. We begin by deriving an estimate for \(r^{(n)}_i.\)

For \(n=0\) the equalities hold:

\begin{align*} y(x_{i,j})-u^{(0)}_{i,j}=& y(x_{i,j})-u_i=\left(y(x_{i,j})-y(x_i)\right)+\left(y(x_i)-u_i\right) \\ =& \int _{x_i}^{x_{i,j}}f(s,y(s))\mathrm{d}s+\left(y(x_i)-u_i\right) \end{align*}

and then we deduce

\[ r^{(0)}_{i,j}\le K_1h +e_i,\ \forall \ j\in \{ 1,2,\ldots ,m\} \quad \Rightarrow \quad r^{(0)}_i\le K_1h +e_i. \]

If \(n{\gt}0,\) for \(x=x_{i,k}\) the equality (3.17) may be written as

\begin{equation} \label{sac16} y(x_{i,k})=y(x_i)+h\sum _{j=1}^mw_{j,k}f(x_{i,j},y(x_{i,j}))+\int _{x_i}^{x_{i,k}}R(s)\mathrm{d}s. \tag{3.18} \end{equation}

Subtracting (3.12) from (3.18) we obtain

\begin{align*} & y(x_{i,k})-u^{(n+1)}_{i,k}=\\ & =y(x_i)-u_i+h\sum _{j=1}^mw_{j,k}\left(f(x_{i,j},y(x_{i,j}))-f(x_{i,j},u^{(n)}_{i,j})\right)+\int _{x_i}^{x_{i,k}}R(s)\mathrm{d}s. \end{align*}

It follows that

\[ r^{(n+1)}_{i,k}\le e_i+h\tilde{L}\tilde{w}r^{(n)}_i\! +\! \tfrac {K_m}{m!}h^{m\! +\! 1}\quad \Rightarrow \quad r^{(n\! +\! 1)}_i\le e_i+h\tilde{L}\tilde{w}r^{(n)}_i\! +\! \tfrac {K_m}{m!}h^{m+1} \]

If \(h\) is small enough (\(h\tilde{L}\tilde{w}{\lt}1\)) then

\begin{align} r^{(n)}_i& \le (h\tilde{L}\tilde{w})^nr^{(0)}_i+\tfrac {1}{1-h\tilde{L}\tilde{w}}\left(e_i+\tfrac {K_m}{m!}h^{m+1}\right)\nonumber \\ & \le (h\tilde{L}\tilde{w})^n(K_1h +e_i)+\tfrac {1}{1-h\tilde{L}\tilde{w}}\left(e_i+\tfrac {K_m}{m!}h^{m+1}\right)\nonumber \\ \label{sac17} & = \left((h\tilde{L}\tilde{w})^n+\tfrac {1}{1-h\tilde{L}\tilde{w}}\right)e_i+ h^{n+1}(\tilde{L}\tilde{w})^nK_1+\tfrac {K_mh^{m+1}}{m!(1-h\tilde{L}\tilde{w})}. \end{align}

Evaluating \(e_i\) we distinguish two cases depending on the definition of \(u_{i+1}\):

\[ u_{i+1}=u^{(n)}_{i,m}=u_i+h\sum _{j=1}^mw_{j,m}f(x_{i,j},u^{(n-1)}_{i,j}),\qquad (x_{i+1}=\varphi _i(\xi _m)) \]

or

\[ u_{i+1}=u_i+\tfrac {h}{b-a}\sum _{j=1}^m\left(\int _a^b\tilde{l}_j(\xi )\mathrm{d}\xi \right)f(x_{i,j},u^{(n)}_{i,j}), \qquad (x_{i+1}=\varphi _i(b)). \]

Corresponding to the two cases, from (3.17) we obtain the equalities

\[ y(x_{i+1})=y(x_i)+h\sum _{j=1}^mw_{j,m}f(x_{i,j},y(x_{i,j}))+\int _{x_i}^{x_{i,m}}R(s)\mathrm{d}s \]

and respectively

\[ y(x_{i+1})=y(x_i)+\tfrac {h}{b-a}\sum _{j=1}^m\left(\int _a^b\tilde{l}_j(\xi )\mathrm{d}\xi \right)f(x_{i,j},y(x_{i,j}))+\int _{x_i}^{x_{i+1}}R(s)\mathrm{d}s. \]

Computing \(y(x_{i+1})-u_{i+1}\) it results

\begin{align*} y(x_{i+1})\! -\! u_{i+1}=& y(x_i)\! -\! u_i+h\sum _{j=1}^mw_{j,m}\left(f(x_{i,j},y(x_{i,j}))\! -\! f(x_{i,j},u^{(n-1)}_{i,j})\right) \! +\! \int _{x_i}^{x_{i,m}}R(s)\mathrm{d}s, \end{align*}

respectively

\begin{align*} y(x_{i+1})\! -\! u_{i+1}\! =\! & y(x_i)\! -\! u_i\! +\! \tfrac {h}{b-a}\sum _{j=1}^m\left(\int _a^b\tilde{l}_j(\xi )\mathrm{d}\xi \right) \left(f(x_{i,j},y(x_{i,j}))\! -\! f(x_{i,j},u^{(n)}_{i,j})\right)+\\ & +\int _{x_i}^{x_{i+1}}R(s)\mathrm{d}s. \end{align*}

It follows that

\[ e_{i+1}\le e_i+h\tilde{L}\tilde{w}r^{(n-1)}_i+\tfrac {K_m}{m!}h^{m+1} \]

and

\[ e_{i+1}\le e_i+h\tilde{L}\tilde{w}r^{(n)}_i+\tfrac {K_m}{m!}h^{m+1}. \]

We remark that the two estimates differ only in the upper index of \(r_i\); the slightly better bound obtained in the second case is what justifies computing the \(m\) additional integrals.

From hereon it is sufficient to consider only the first case. Using (3.19) we obtain

\begin{align*} & e_{i+1}\le \\ & \le e_i\! +\! h\tilde{L}\tilde{w}\left( \left((h\tilde{L}\tilde{w})^{n-1}\! +\! \tfrac {1}{1-h\tilde{L}\tilde{w}}\right)e_i\! +\! h^{n}(\tilde{L}\tilde{w})^{n-1}K_1\! +\! \tfrac {K_mh^{m\! +\! 1}}{m!(1-h\tilde{L}\tilde{w})} \right)\! +\! \tfrac {K_m}{m!}h^{m+1}\\ & =e_i\left(1+(h\tilde{L}\tilde{w})^n+\tfrac {h\tilde{L}\tilde{w}}{1-h\tilde{L}\tilde{w}}\right)+ h^{n+1}(\tilde{L}\tilde{w})^nK_1+\tfrac {K_mh^{m+1}}{m!(1-h\tilde{L}\tilde{w})}. \end{align*}

Because \(h\tilde{L}\tilde{w}{\lt} 1\) implies \((h\tilde{L}\tilde{w})^n\le h\tilde{L}\tilde{w}\), the above inequality becomes

\[ e_{i+1}\le e_i\left(1+h\tilde{L}\tilde{w}+\tfrac {h\tilde{L}\tilde{w}}{1-h\tilde{L}\tilde{w}}\right)+ h^2\tilde{L}\tilde{w}K_1+\tfrac {K_mh^{m+1}}{m!(1-h\tilde{L}\tilde{w})}. \]

Consequently

\begin{align*} e_i& \le \Big(1+h\tilde{L}\tilde{w}+\tfrac {h\tilde{L}\tilde{w}}{1-h\tilde{L}\tilde{w}}\Big)^i\Bigg(e_0+ \tfrac {h^2\tilde{L}\tilde{w}K_1+\frac{K_mh^{m+1}}{m!(1-h\tilde{L}\tilde{w})}} {h\tilde{L}\tilde{w}+\frac{h\tilde{L}\tilde{w}}{1-h\tilde{L}\tilde{w}}}\Bigg)\\ & \le e^{i\Big(h\tilde{L}\tilde{w}+\frac{h\tilde{L}\tilde{w}}{1-h\tilde{L}\tilde{w}}\Big)} \Bigg(e_0+ \tfrac {h\tilde{L}\tilde{w}K_1+\frac{K_mh^m}{m!(1-h\tilde{L}\tilde{w})}} {\tilde{L}\tilde{w}+\frac{\tilde{L}\tilde{w}}{1-h\tilde{L}\tilde{w}}}\Bigg). \end{align*}

Because \(e_0=0,\) from the above inequality it follows that:

\[ \max _{1\le i\le M}e_i \le e^{(x_f-x_0)\tilde{L}\tilde{w}\left(1+\frac{1}{1-h\tilde{L}\tilde{w}}\right)} \Bigg(\tfrac {h\tilde{L}\tilde{w}K_1+\frac{K_mh^m}{m!(1-h\tilde{L}\tilde{w})}} {\tilde{L}\tilde{w}+\frac{\tilde{L}\tilde{w}}{1-h\tilde{L}\tilde{w}}}\Bigg)\rightarrow 0, \]

for \(h\searrow 0\ \Leftrightarrow \ M\rightarrow \infty .\)

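The theorem can be observed numerically already in the simplest case \(m=2\), where one subinterval step is the iteration (3.15), i.e. Picard iterations for the implicit trapezoidal rule: halving \(h\) should divide the error by roughly \(2^2.\) A self-contained sketch (our own, for the test problem \(\dot{y}=y\); all names are ours):

```python
import math

def picard_trapezoid(f, x0, y0, xf, M, eps=1e-14, nmax=60):
    """m = 2 variant: Picard iterations (3.15) for the implicit trapezoidal rule."""
    h = (xf - x0) / M
    x, u = x0, y0
    for _ in range(M):
        fi = f(x, u)
        v = u                                   # u^{(0)}_{i+1} = u_i
        for _ in range(nmax):
            vnew = u + 0.5 * h * (fi + f(x + h, v))
            done = abs(vnew - v) < eps
            v = vnew
            if done:
                break
        x, u = x + h, v
    return u

# y' = y, y(0) = 1: the error at x = 1 should shrink roughly as h^2
e1 = abs(picard_trapezoid(lambda x, y: y, 0.0, 1.0, 1.0, 20) - math.e)
e2 = abs(picard_trapezoid(lambda x, y: y, 0.0, 1.0, 1.0, 40) - math.e)
```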

4 Picard iterations with a variable reference set

We keep some of the notations introduced above and define those that differ.

Let \(a\le \xi _1^m{\lt}\xi _2^m{\lt}\ldots {\lt}\xi _m^m\le b\) be the roots of the polynomial \(p_m(x),\) where \((p_m)_{m\in \mathbb {N}}\) is a sequence of orthogonal polynomials with the weight \(\rho \in L_2[a,b]\) on the interval \([a,b].\) It is assumed that \(\frac{1}{\rho }\in L_2[a,b],\) too. These are requirements of the convergence theorem [ 8 ] .

If \(\varphi _i\) is the affine function mapping \([a,b]\) onto \([x_i,x_{i+1}]\) then the nodes are defined as

\[ x_{i,j}^m=\varphi _i(\xi _j^m),\qquad j\in \{ 1,2,\ldots ,m\} ,\quad m\in \mathbb {N}^*. \]

For \(x\in [x_i,x_{i+1}],\) we define

\begin{align*} u^{m+1}(x)& =u_i+\int _{x_i}^xL(\mathbb {P}_{m-1};x_{i,1}^m,x_{i,2}^m,\ldots ,x_{i,m}^m;f(\cdot ,u^m(\cdot )))(s)\mathrm{d}s=\\ \nonumber & =u_i+\sum _{j=1}^m\left(\int _{x_i}^xl_j^m(s)\mathrm{d}s\right)f(x_{i,j}^m,u^m(x_{i,j}^m))\\ & =u_i+\tfrac {h}{b-a}\sum _{j=1}^m\left(\int _a^{\zeta }\tilde{l}_j^m(\xi )\mathrm{d}\xi \right)f(x_{i,j}^m,u^m(x_{i,j}^m)), \end{align*}

where \(\zeta =\varphi _i^{-1}(x)\) and

\[ \tilde{l}_j^m(\xi )=\tfrac {(\xi -\xi _1^m)\ldots (\xi -\xi _{j-1}^m)(\xi -\xi _{j+1}^m)\ldots (\xi -\xi _m^m)} {(\xi _j^m-\xi _1^m)\ldots (\xi _j^m-\xi _{j-1}^m)(\xi _j^m-\xi _{j+1}^m)\ldots (\xi _j^m-\xi _m^m)}. \]

The vectors \(u_{i,j}^m\) are defined iteratively

\begin{align*} u_{i,1}^1=& u_i,\\ u_{i,1}^2 =& u_i+\tfrac {h}{b-a}\bigg(\int _a^{\xi _1^2}\tilde{l}_1^1(\xi )\mathrm{d}\xi \bigg) f(x_{i,1}^1,u_{i,1}^1)\\ =& u_i+\tfrac {h}{b-a}(\xi _1^2-a)f(x_{i,1}^1,u_{i,1}^1);\\ u_{i,2}^2 =& u_i+\tfrac {h}{b-a}\bigg(\int _a^{\xi _2^2}\tilde{l}_1^1(\xi )\mathrm{d}\xi \bigg) f(x_{i,1}^1,u_{i,1}^1)\\ =& u_i+\tfrac {h}{b-a}(\xi _2^2-a)f(x_{i,1}^1,u_{i,1}^1). \end{align*}

It was taken into account that \(\tilde{l}_1^1(\xi )=1.\) In general,

\[ u_{i,k}^{m+1}=u^{m+1}(x_{i,k}^{m+1})=u_i+ \tfrac {h}{b-a}\sum _{j=1}^m\bigg(\int _a^{\xi _k^{m+1}}\tilde{l}_j^m(\xi )\mathrm{d}\xi \bigg)f(x_{i,j}^m,u_{i,j}^m), \]

for \(k\in \{ 1,2,\ldots ,m+1\} \) and \(m\in \mathbb {N}^*.\)

We must compute

\[ u_{i+1}^{m+1}=u^{m+1}(x_{i+1})=u_i+ \tfrac {h}{b-a}\sum _{j=1}^m\bigg(\int _a^b\tilde{l}_j^m(\xi )\mathrm{d}\xi \bigg)f(x_{i,j}^m,u_{i,j}^m), \]

too.

The computation of the vectors \(u_{i,k}^{m+1},\ k\in \{ 1,2,\ldots ,m+1\} ,\ u_{i+1}^{m+1}\) can be written in matrix form. For simplicity we denote

\begin{eqnarray*} w_{j,k}& =& \int _a^{\xi _k^{m+1}}\tilde{l}_j^m(\xi )\mathrm{d}\xi ,\quad j\in \{ 1,\ldots ,m\} ,\ k\in \{ 1,\ldots ,m+1\} ,\\ w_j & =& \int _a^b\tilde{l}_j^m(\xi )\mathrm{d}\xi ,\quad j\in \{ 1,\ldots ,m\} \end{eqnarray*}

and the matrix

\[ W=\tfrac {h}{b-a}\left(\begin{array}{cccc} w_{1,1} & w_{2,1} & \ldots & w_{m,1} \\ w_{1,2} & w_{2,2} & \ldots & w_{m,2} \\ \vdots & & & \vdots \\ w_{1,m+1} & w_{2,m+1} & \ldots & w_{m,m+1} \\ w_1 & w_2 & \ldots & w_m \end{array}\right)\in M_{m+2,m}(\mathbb {R}) \]
\[ F=[f(x_{i,1}^m,u_{i,1}^m),f(x_{i,2}^m,u_{i,2}^m),\ldots ,f(x_{i,m}^m,u_{i,m}^m)]\in M_{N,m}(\mathbb {R}). \]

The following equality holds

\[ [u_{i,1}^{m+1},u_{i,2}^{m+1},\ldots ,u_{i,m+1}^{m+1},u_{i+1}^{m+1}]^T={\underbrace{[u_i,\ldots ,u_i]}_{m+2}}^T+W\cdot F^T. \]

For an imposed tolerance \(\varepsilon {\gt}0,\) the iterations continue until the condition \(\| u_{i+1}^{m+1}-u_{i+1}^{m}\| {\lt}\varepsilon \) is fulfilled. The initial approximation is \(u_{i+1}^1=u_i.\) When this condition is fulfilled we set \(u_{i+1}=u_{i+1}^{m+1}.\)

A convergence result is given in [ 8 ] .
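A scalar sketch of one step of this procedure, with Legendre roots (\(\rho =1\) on \([-1,1]\)), may look as follows; the code and all its names are ours, not taken from [ 8 ] :

```python
import numpy as np

def lagrange_antiderivatives(xi):
    """Antiderivatives of the Lagrange basis polynomials for the nodes xi."""
    out = []
    for j in range(len(xi)):
        lj = np.polynomial.Polynomial([1.0])
        for mu in range(len(xi)):
            if mu != j:
                lj *= np.polynomial.Polynomial([-xi[mu], 1.0]) / (xi[j] - xi[mu])
        out.append(lj.integ())
    return out

def variable_picard_step(f, x_i, u_i, h, eps=1e-10, mmax=25):
    """One step x_i -> x_i + h with an increasing Legendre reference set on [-1, 1]."""
    a, b = -1.0, 1.0
    xi = np.array([0.0])                 # root of P_1: the case m = 1
    u_nodes = np.array([u_i])            # u^1 at that node equals u_i
    u_end_prev = u_i                     # u^1(x_{i+1}) = u_i
    for m in range(1, mmax):
        Ls = lagrange_antiderivatives(xi)
        F = [f(x_i + h * (s - a) / (b - a), u_nodes[j]) for j, s in enumerate(xi)]
        def u_next(zeta):                # u^{m+1}(phi_i(zeta)) for zeta in [a, b]
            return u_i + h / (b - a) * sum(
                (Ls[j](zeta) - Ls[j](a)) * F[j] for j in range(m))
        xi_new = np.polynomial.legendre.Legendre.basis(m + 1).roots()
        u_nodes = np.array([u_next(z) for z in xi_new])
        u_end = u_next(b)                # candidate u_{i+1}^{m+1}
        if abs(u_end - u_end_prev) < eps:
            break
        xi, u_end_prev = xi_new, u_end
    return u_end

# One step of y' = y from (0, 1) with h = 0.5: the result approximates e^{1/2}
u_half = variable_picard_step(lambda x, y: y, 0.0, 1.0, 0.5)
```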

5 Stiff problems

From (2.3), if \(s=x_0+h\sigma \) then

\[ y(x)=y(x_0)+h\int _0^{\frac{x-x_0}{h}}f(x_0+h\sigma ,y(x_0+h\sigma ))\mathrm{d}\sigma , \]

with \(x\in [x_0,x_0+h]\ \Leftrightarrow \ \sigma \in [0,1].\)

Setting

\[ y(x_0+h\sigma )=y_0+hv(\sigma ) \]

we derive that \(v(0)=0\) and

\[ \tfrac {\mathrm{d}v(\sigma )}{\mathrm{d}\sigma }=f(x_0+h\sigma ,y_0+hv(\sigma ))\quad \Leftrightarrow \quad v(s)=\int _0^sf(x_0+h\sigma ,y_0+hv(\sigma ))\mathrm{d}\sigma . \]

Following [ 4 ] , [ 2 ] , by the stabilization principle, the solution of the partial differential system

\begin{equation} \label{sac100} \tfrac {\partial w(\zeta ,t)}{\partial t}=-w(\zeta ,t)+\int _0^{\zeta }f(x_0+h\sigma ,y_0+h w(\sigma ,t))\mathrm{d}\sigma \tag{5.19} \end{equation}

has the property, cf. [ 4 ] , [ 2 ] ,

\begin{equation} \label{sac103} \lim _{t\rightarrow \infty }\| w(\zeta ,t)-v(\zeta )\| =0,\qquad \mbox{for}\quad \zeta \in [0,1]. \tag{5.20} \end{equation}

We describe a numerical method to approximate the solution of (5.19).

Let \(\tau {\gt}0\) and consider the sequence \(t^n=n\tau ,\ n\in \mathbb {N}.\) The equation (5.19) may be rewritten as

\[ \tfrac {\partial e^tw(\zeta ,t)}{\partial t}=e^t\int _0^{\zeta }f(x_0+h\sigma ,y_0+h w(\sigma ,t))\mathrm{d}\sigma \]

and integrating from \(n\tau \) to \((n\! +\! 1)\tau \) we obtain

\begin{equation} \label{sac101} w(\zeta ,t^{n\! +\! 1})\! =\! e^{-\tau } w(\zeta ,t^n)\! +\! e^{\! -\! (n\! +\! 1)\tau } \int _{n\tau }^{(n\! +\! 1)\tau }e^{\eta }\bigg(\int _0^{\zeta }f(x_0\! +\! h\sigma ,y_0\! +\! h w(\sigma ,\eta ))\mathrm{d}\sigma \bigg)\mathrm{d}\eta . \tag{5.21} \end{equation}

Without changing the notation for \(w,\) we substitute in (5.21) \(f(x_0+h\sigma ,y_0+h w(\sigma ,\eta ))\) by a Lagrange interpolation polynomial

\begin{align} \label{sac102} w(\zeta ,t^{n+1})=& e^{-\tau } w(\zeta ,t^n)+ \nonumber \\ & +e^{-(n+1)\tau } \int _{n\tau }^{(n+1)\tau }e^{\eta }\bigg(\int _0^{\zeta }L(\mathbb {P}_{m-1};\xi _1,\ldots ,\xi _m;f(x_0+h\,\cdot \, ,y_0+hw(\cdot ,\eta )))(\sigma )\mathrm{d}\sigma \bigg)\mathrm{d}\eta \tag{5.22}\\ =& e^{-\tau } w(\zeta ,t^n)\! +\! e^{-(n\! +\! 1)\tau }\sum _{j=1}^m \int _{n\tau }^{(n+1)\tau }\! \! e^{\eta }\bigg(\int _0^{\zeta }\! f(x_0\! +\! h\xi _j,y_0\! +\! h w(\xi _j,\eta ))l_j(\sigma )\mathrm{d}\sigma \bigg)\mathrm{d}\eta ,\nonumber \end{align}

where \(0=\xi _1{\lt}\xi _2{\lt}\ldots {\lt}\xi _m=1.\)

We denote \(w^n(\zeta )=w(\zeta ,t^n)\) and in the right hand side of (5.22) we take \(w(\xi _j,\eta )=w^n(\xi _j),\) for any \(j\in \{ 1,2,\ldots ,m\} \) and \(\eta \in [n\tau ,(n+1)\tau ].\) Then

\[ w^{n+1}(\zeta )=e^{-\tau }w^n(\zeta )+(1-e^{-\tau })\sum _{j=1}^mf(x_0+h\xi _j,y_0+h w^n(\xi _j))\int _0^{\zeta }l_j(\sigma )\mathrm{d}\sigma . \]

Denoting \(w^n_j=w^n(\xi _j),\) for \(\zeta =\xi _k,\ k\in \{ 1,2,\ldots ,m\} \) we obtain the iterative relations

\[ w^{n+1}_k=e^{-\tau }w^n_k+(1-e^{-\tau })\sum _{j=1}^mf(x_0+h\xi _j,y_0+h w^n_j)\int _0^{\xi _k}l_j(\sigma )\mathrm{d}\sigma . \]

The iterations proceed until the stopping condition \(\max _{1\le j\le m}\| w^{n+1}_j-w^n_j\| {\lt}\varepsilon \) is fulfilled, where \(\varepsilon {\gt}0\) is a tolerance. According to (5.20) we set \(v(1)=w^{n+1}_m\) and the procedure continues with \(u_{i+1}=u_i+h w^{n+1}_m.\)
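The iterative relations above can be sketched in a few lines of code. What follows is a minimal Python illustration, not the author's Scilab implementation (that code is in the repository cited below); all function and variable names are our own. The integrals \(\int _0^{\xi _k}l_j(\sigma )\mathrm{d}\sigma \) are computed once per reference set, and the damped iteration is applied to the scalar stiff problem \(\dot y=-20y,\ y(0)=1\) that also appears among the numerical experiments, with \(\tau =10,\ M=20,\ m=5\) equidistant nodes and \(\varepsilon =10^{-7}\).

```python
import numpy as np

def integration_matrix(xi):
    """A[k, j] = integral of the Lagrange basis polynomial l_j from 0 to xi_k."""
    m = len(xi)
    A = np.empty((m, m))
    for j in range(m):
        c = np.polyfit(xi, np.eye(m)[j], m - 1)   # coefficients of l_j
        ci = np.polyint(c)                        # antiderivative vanishing at 0
        A[:, j] = np.polyval(ci, xi)
    return A

def stabilized_picard_step(f, x, y, h, xi, A, tau=10.0, eps=1e-7, itmax=200):
    """One step x -> x + h of the damped iteration
    w^{n+1}_k = e^{-tau} w^n_k + (1 - e^{-tau}) sum_j f(x + h xi_j, y + h w^n_j) A[k, j]."""
    w = np.full(len(xi), f(x, y))                 # initial guess for the w_j
    d = np.exp(-tau)
    for _ in range(itmax):
        F = np.array([f(x + h * t, y + h * wj) for t, wj in zip(xi, w)])
        w_new = d * w + (1.0 - d) * (A @ F)
        if np.max(np.abs(w_new - w)) < eps:
            w = w_new
            break
        w = w_new
    return y + h * w[-1]                          # u_{i+1} = u_i + h w^{n+1}_m

# Stiff test problem: y' = -20 y, y(0) = 1 on [0, 1], M = 20 steps
f = lambda x, y: -20.0 * y
m, M = 5, 20
xi = np.linspace(0.0, 1.0, m)
A = integration_matrix(xi)
h, y, err = 1.0 / M, 1.0, 0.0
for i in range(M):
    y = stabilized_picard_step(f, i * h, y, h, xi, A)
    err = max(err, abs(y - np.exp(-20.0 * (i + 1) * h)))
print(err)  # maximum absolute error against the exact solution e^{-20x}
```

The factor \(e^{-\tau }\) plays the role of the damping coming from the stabilization principle; for large \(\tau \) the scheme approaches the undamped successive approximations.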

6 Numerical experiments

Using computer programs based on these methods we solved the following IVPs:

  1. ( [ 9 , p. 234 ] )

    \[ \left\{ \begin{array}{lcl} \dot{y} & = & y\tfrac {4(x+2)^3-y}{(x+2)^4-1},\qquad x\in [0,1],\\ y(0) & =& 15 \end{array}\right. \]

    with the solution \(y(x)=1+(x+2)+(x+2)^2+(x+2)^3.\)

    For \(M=5\) and the tolerance \(\varepsilon =10^{-5}\), the maximum error
    \(\max _{0\le i\le M}\| y(x_i)-u_i\| \) and \(N_f,\) the number of calls of the function \(f,\) are given in Table 6.

    \begin{tabular}{cc|cc}
    \multicolumn{2}{c|}{Fixed equidistant reference set, \(m=3\)} & \multicolumn{2}{c}{Variable reference set}\\
    Error & \(N_f\) & Error & \(N_f\)\\
    \hline
    1.82591e-08 & 75 & 8.94274e-08 & 99
    \end{tabular}

    Table 6. Results for Example (1).
  2. ( [ 9 , p. 244 ] )

    \[ \left\{ \begin{array}{lclcl} \dot{y_1} & = & y_2,& \quad & y_1(0)=1, \qquad x\in [0,x_f],\\ \dot{y_2} & = & -\frac{y_1}{r^3},& \quad & y_2(0)=0, \\ \dot{y_3} & = & y_4,& \quad & y_3(0)=0, \\ \dot{y_4} & = & -\frac{y_3}{r^3},& \quad & y_4(0)=1, \end{array}\right. \]

    where \(r=\sqrt{y_1^2+y_3^2}\) and with the solution \(y_1=\cos {x}, y_2=-\sin {x}, y_3=\sin {x}, y_4=\cos {x}.\)

    The results of our numerical experiments are listed in Table 6.

    \begin{tabular}{ccc|cc|cc}
     & & & \multicolumn{2}{c|}{Fixed equidistant reference set, \(m=3\)} & \multicolumn{2}{c}{Variable reference set}\\
    \(x_f\) & \(M\) & \(\varepsilon \) & Error & \(N_f\) & Error & \(N_f\)\\
    \hline
    \(2\pi \) & 10 & \(10^{-5}\) & 0.0247309 & 300 & 6.47998e-05 & 550\\
    \(2\pi \) & 10 & \(10^{-9}\) & 0.0246415 & 480 & 2.24345e-09 & 1050\\
    \(4\pi \) & 10 & \(10^{-5}\) & 0.888217 & 534 & 0.000142862 & 966\\
    \(4\pi \) & 20 & \(10^{-9}\) & 0.0496889 & 960 & 1.05491e-08 & 2100\\
    \(6\pi \) & 10 & \(10^{-5}\) & 14.4197 & 762 & 6.23799e-05 & 1530\\
    \(6\pi \) & 40 & \(10^{-9}\) & 0.0232977 & 1560 & 3.06542e-09 & 3640
    \end{tabular}

    Table 6. Results for Example (2).

    Now we compare the results obtained using equidistant nodes with those obtained using Chebyshev points of the second kind as the reference set. For the same example the results are given in Table 6.

    \begin{tabular}{ccc|cc|cc}
     & & & \multicolumn{2}{c|}{Fixed equidistant reference set, \(m=5\)} & \multicolumn{2}{c}{Chebyshev fixed reference set, \(m=5\)}\\
    \(x_f\) & \(M\) & \(\varepsilon \) & Error & \(N_f\) & Error & \(N_f\)\\
    \hline
    \(2\pi \) & 10 & \(10^{-5}\) & 6.93002e-05 & 400 & 2.69646e-05 & 400\\
    \(2\pi \) & 10 & \(10^{-9}\) & 1.91509e-05 & 650 & 8.13527e-06 & 650\\
    \(4\pi \) & 10 & \(10^{-5}\) & 0.00215349 & 600 & 0.000338729 & 551\\
    \(4\pi \) & 20 & \(10^{-9}\) & 3.85763e-05 & 1300 & 1.6391e-05 & 1300\\
    \(6\pi \) & 10 & \(10^{-5}\) & 0.0275954 & 900 & 0.0164587 & 820\\
    \(6\pi \) & 40 & \(10^{-9}\) & 1.00764e-05 & 2200 & 4.18516e-06 & 2200
    \end{tabular}

    Table 6. Results for Example (2).

    As expected, the results using Chebyshev points of the second kind are better than those obtained using equidistant nodes, due to the superior approximation properties of the Lagrange interpolation polynomial at Chebyshev points of the second kind compared with equidistant points [ 11 ] .

  3. ( [ 9 , p. 245 ] ) Keeping the differential system as in the previous example but changing the initial conditions to \(y(0)=[0.4,0,0,2],\) for \(x_f=2\pi ,\ M=20\) and \(\varepsilon =10^{-9}\), the method based on the variable reference set gave \(\max _{0\le i\le M}\| y(x_i)-u_i\| =2.94126\cdot 10^{-9}\) and \(N_f=1400.\)

    In this case the solution is

    \[ y(x)=[\cos {u}-0.6, \tfrac {-\sin {u}}{1-0.6 \cos {u}}, 0.8\sin {u}, \tfrac {0.8\cos {u}}{1-0.6\cos {u}}], \]

    where \(x=u-0.6\sin {u}.\)

    Based on the previous examples, the method with a variable number of reference points appears more efficient than the method with a fixed number of reference points, although we cannot deduce such a conclusion theoretically.
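For the non-stiff examples above, the undamped successive approximations (formally the limit \(\tau \to \infty \) of the relations of the previous section, so the factor \(e^{-\tau }\) disappears) can be sketched similarly. The following is a hedged Python illustration with our own names and tolerances, not the author's Scilab code; it applies the fixed-reference-set method to Example 1 with both equidistant nodes and Chebyshev points of the second kind mapped to \([0,1]\).

```python
import numpy as np

def integration_matrix(xi):
    """A[k, j] = integral of the Lagrange basis polynomial l_j from 0 to xi_k."""
    m = len(xi)
    A = np.empty((m, m))
    for j in range(m):
        ci = np.polyint(np.polyfit(xi, np.eye(m)[j], m - 1))
        A[:, j] = np.polyval(ci, xi)
    return A

def picard_step(f, x, y, h, xi, A, eps=1e-10, itmax=100):
    """Undamped successive approximations on one subinterval [x, x + h]."""
    w = np.full(len(xi), f(x, y))
    for _ in range(itmax):
        F = np.array([f(x + h * t, y + h * wj) for t, wj in zip(xi, w)])
        w_new = A @ F
        if np.max(np.abs(w_new - w)) < eps:
            w = w_new
            break
        w = w_new
    return y + h * w[-1]

# Example 1: y' = y (4(x+2)^3 - y) / ((x+2)^4 - 1), y(0) = 15 on [0, 1]
f = lambda x, y: y * (4 * (x + 2) ** 3 - y) / ((x + 2) ** 4 - 1)
exact = lambda x: 1 + (x + 2) + (x + 2) ** 2 + (x + 2) ** 3
M, m = 5, 5
h = 1.0 / M
errors = {}
for name, xi in {
    "equidistant": np.linspace(0.0, 1.0, m),
    # Chebyshev points of the second kind mapped from [-1, 1] to [0, 1]
    "chebyshev": (1.0 - np.cos(np.pi * np.arange(m) / (m - 1))) / 2.0,
}.items():
    A = integration_matrix(xi)
    y, err = 15.0, 0.0
    for i in range(M):
        y = picard_step(f, i * h, y, h, xi, A)
        err = max(err, abs(y - exact((i + 1) * h)))
    errors[name] = err
print(errors)  # both errors should be small for this smooth problem
```

Since both node families here include the endpoints \(0\) and \(1\), the last component \(w_m\) directly provides the update \(u_{i+1}=u_i+h w_m\).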

Using the method for stiff problems presented above we solved:

  4. \[ \left\{ \begin{array}{lclcl} \dot{y_1} & = & 998y_1+1998 y_2,& \quad & y_1(0)=1, \qquad x\in [0,1],\\ \dot{y_2} & = & -999y_1-1999 y_2,& \quad & y_2(0)=0, \\ \end{array}\right. \]

    with the solution \(y_1=2e^{-x}-e^{-1000x}, y_2=-e^{-x}+e^{-1000x}.\)

    For \(\tau =10\) the results are given in Table 6. We recall that \(\tau \) is the length of the step for the \(t\) variable of the stabilization principle.

    \begin{tabular}{cc|cc|cc}
     & & \multicolumn{2}{c|}{Fixed equidistant reference set, \(m=5\)} & \multicolumn{2}{c}{Chebyshev fixed reference set, \(m=5\)}\\
    \(M\) & \(\varepsilon \) & Error & \(N_f\) & Error & \(N_f\)\\
    \hline
    300 & \(10^{-5}\) & 0.00164977 & 8585 & 0.000402419 & 8435\\
    500 & \(10^{-7}\) & 0.000128781 & 10700 & 4.35037e-05 & 10555
    \end{tabular}

    Table 6. Results for Example (4).
  5. \[ \dot{y} = -20y, \qquad y(0)=1,\quad x\in [0,1]. \]

    For \(\tau =10,\ M=20\) and \(\varepsilon =10^{-7}\) the results are given in Table 6.

    \begin{tabular}{cc|cc}
    \multicolumn{2}{c|}{Fixed equidistant reference set, \(m=5\)} & \multicolumn{2}{c}{Chebyshev fixed reference set, \(m=5\)}\\
    Error & \(N_f\) & Error & \(N_f\)\\
    \hline
    1.19382e-06 & 800 & 4.58431e-07 & 785
    \end{tabular}

    Table 6. Results for Example (5).

To make the results reproducible we provide some code at https://github.com/e-scheiber/scilab-ivpsolvers.git.

Acknowledgement

The author wishes to thank the referee for suggesting many improvements of this paper.

Bibliography

1

X. Bai, Modified Chebyshev-Picard Iteration Methods for Solution of Initial Value and Boundary Value Problems, PhD Dissertation, Texas A&M University, 2010.

2

V.V. Bobkov, B.V. Faleichik, P.A. Mandrik, V.I. Repnikov, Solving Stiff Problems Using Generalized Picard Iterations, AIP Conference Proceedings, 1168 (2009), 65.

3

M.F. Causley, D.C. Seal, On the Convergence of Spectral Deferred Correction Methods, arXiv:1706.06245v1 [math.NA], 2017.

4

B.V. Faleichik, Analytic Iterative Processes and Numerical Algorithms for Stiff Problems, Computational Methods in Applied Mathematics, 8 (2008) no. 2, 116–129.

5

T. Fukushima, Picard Iteration Method, Chebyshev Polynomial Approximation, and Global Numerical Integration of Dynamical Motions, The Astronomical J., 113 (1997) no. 5, 1909–1914.

6

T. Fukushima, Vector integration of dynamical motions by the Picard-Chebyshev method, The Astronomical J., 113 (1997) no. 6, 2325–2328.

7

E. Hairer, S.P. Nørsett, G. Wanner, Solving Ordinary Differential Equations I. Nonstiff Problems, Second ed., Springer, Berlin, 1993.

8

E. Scheiber, A multistep method to solve the initial value problem. The 4th Romanian-German Seminar on Approximation Theory and its Applications. Braşov, 2000 (ed. H. Gonska, D. Kacso, L. Beutel) Gerhard Mercator Universitat, Duisburg, 124–132.

9

L.F. Shampine, M.K. Gordon, Computer Solution of Ordinary Differential Equations: The Initial Value Problem, W.H. Freeman and Company, San Francisco, 1975.

10

J. Stoer, R. Bulirsch, Introduction to Numerical Analysis, Springer-Verlag, New York, 1993.

11

L.N. Trefethen, Approximation Theory and Approximation Practice, SIAM, Philadelphia, 2012.

12

***, http://mathfaculty.fullerton.edu/mathews/n2003/PicardIterationMod.html, 2017.