Cite this paper as
F. Pătrulescu, Steffensen type methods for approximating solutions of differential equations, Studia Universitatis Babeş-Bolyai, seria Mathematica, vol. 56, no. 2 (2011), pp. 505-515.
Steffensen type methods for approximating solutions of differential equations
F. Pătrulescu
Tiberiu Popoviciu Institute of Numerical Analysis, P.O. Box 68-1, 400110 Cluj-Napoca, Romania, fpatrulescu@ictp.acad.ro
2010 Mathematics Subject Classification: 65L04, 65L05, 65H05
Keywords: initial value problems, stiff equations, Steffensen method, Newton method, convergence order
Abstract
Implicit methods for the numerical solution of ODEs lead to nonlinear equations, which are usually solved by the Newton method. We study the use of a Steffensen type method instead, and we give conditions under which this method provides bilateral approximations for the solution of these equations; this approach offers a more rigorous control of the errors. Moreover, the method can be applied even in the case when certain functions are not differentiable on the domain of definition. The convergence order is the same as for the Newton method.
1 Introduction
The mathematical modeling of many problems in physics, engineering, chemistry, biology, etc. gives rise to ordinary differential equations or systems of ordinary differential equations.
It is known that a high-order initial value problem (IVP) for differential equations or systems of equations can be rewritten as a first-order IVP system (see e.g. [4], [5]), so that the standard IVP can be written in the form:
\begin{equation*}
\left\{\begin{array}{l}
y^{\prime}=f(x, y), \quad x \in I, \\
y(a)=y_{0},
\end{array}\right. \tag{1}
\end{equation*}
where $I \subseteq \mathbb{R}$, $y_{0} \in \mathbb{R}^{m}$, $f: I \times \mathbb{R}^{m} \rightarrow \mathbb{R}^{m}$ and $a \in I$.
A solution is sought on the interval $[a, b] \subset I$, where $a, b$ are finite. In this paper we consider only the scalar case, i.e., $m=1$.
In practice, the number of cases where an exact solution can be found by analytical means is very limited, so that one uses numerical methods for the approximation of the solution.
Integrating (1), for $m=1$, using an implicit linear multistep method with step-size $h$ leads, at each step, to solving an equation of the form:
\begin{equation*}
y=h A \phi(x, y)+\psi . \tag{2}
\end{equation*}
Here $A$ is a constant determined by the numerical method and $\psi$ is a known value.
This equation can be solved by the fixed point iteration
\begin{equation*}
y^{(\nu+1)}=h A \phi\left(x, y^{(\nu)}\right)+\psi, \quad \nu=0,1, \ldots, \tag{3}
\end{equation*}
which converges provided that
\begin{equation*}
h|A| L<1, \tag{4}
\end{equation*}
where $L$ is the Lipschitz constant of $\phi$ with respect to the second variable.
Condition (4) becomes too restrictive for stiff problems. If we use an explicit method to solve a stiff equation, we have to use an excessively small step-size to avoid instability; if we use an implicit method whose absolute stability region is large enough to impose no stability restriction, we can choose the step-size as large as we want, but we will not be able to solve the implicit equation (2) by the iteration (3) unless the step-size is excessively small.
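The step-size restriction (4) is easy to observe numerically. The following Python sketch is ours, not from the paper; the test function $\phi(x,y)=\lambda y$ with a large $|\lambda|$, the tolerances, and the divergence threshold are all assumed illustrative values.

```python
# Illustrative sketch (ours, not from the paper): the fixed point iteration
# y_{k+1} = h*A*phi(x, y_k) + psi applied to the stiff test function
# phi(x, y) = lam*y, which contracts only when h*|A|*|lam| < 1.

def fixed_point(phi, x, h, A, psi, y0, tol=1e-12, max_iter=1000):
    """Solve y = h*A*phi(x, y) + psi by simple iteration; return (y, converged)."""
    y = y0
    for _ in range(max_iter):
        y_new = h * A * phi(x, y) + psi
        if abs(y_new - y) < tol:
            return y_new, True
        if abs(y_new) > 1e12:          # clearly diverging
            return y_new, False
        y = y_new
    return y, False

lam = -1000.0                          # stiff: large Lipschitz constant L = |lam|
phi = lambda x, y: lam * y
A, psi = 0.5, 1.0                      # A = 1/2 as for the trapezoidal rule

# h*|A|*L < 1: converges to the fixed point psi / (1 - h*A*lam)
y_small, ok_small = fixed_point(phi, 0.0, 1e-4, A, psi, y0=psi)
# h*|A|*L > 1: the iteration diverges, although an A-stable implicit
# method itself would tolerate this step size
y_large, ok_large = fixed_point(phi, 0.0, 0.1, A, psi, y0=psi)
print(ok_small, ok_large)
```

The second call fails not because the implicit method is unstable at $h=0.1$, but because the simple iteration no longer contracts; this is exactly the difficulty the Newton and Steffensen iterations are meant to remove.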
In order to overcome this difficulty one uses the Newton iteration instead of the fixed point iteration. The Newton iteration applied to the equation
\begin{equation*}
F(y)=y-h A \phi(x, y)-\psi=0, \tag{5}
\end{equation*}
with $F:[c, d] \rightarrow \mathbb{R}$, is
\begin{equation*}
y^{(\nu+1)}=y^{(\nu)}-\frac{F\left(y^{(\nu)}\right)}{F^{\prime}\left(y^{(\nu)}\right)}, \quad \nu=0,1, \ldots
\end{equation*}
One step of Newton iteration requires considerably more computing time than one step of fixed point iteration: each step of the latter costs one function evaluation, whereas each step of the former calls for the updating of the derivative.
In this paper we approximate the solution of equation (5) using the Steffensen type method:
\begin{equation*}
y^{(\nu+1)}=y^{(\nu)}-\frac{F\left(y^{(\nu)}\right)}{\left[y^{(\nu)}, g\left(y^{(\nu)}\right) ; F\right]}, \quad \nu=0,1, \ldots, \tag{9}
\end{equation*}
where the auxiliary function $g:[c, d] \rightarrow \mathbb{R}$ is chosen such that the equation
\begin{equation*}
y=g(y) \tag{10}
\end{equation*}
is equivalent to (5), and $[u, v ; F]$ represents the first order divided difference of $F$ at the points $u, v \in[c, d]$. This method does not require the calculation of the derivative of the function $F$.
Let $y^{*} \in(c, d)$ be the root of equation (5). If the elements of the sequence $\left(y^{(\nu)}\right)_{\nu \geq 0}$ belong to the interval $[c, d]$, then from the Newton identity and (9) we obtain:
\begin{equation*}
y^{(\nu+1)}-y^{*}=\frac{\left[y^{(\nu)}, g\left(y^{(\nu)}\right), y^{*} ; F\right]}{\left[y^{(\nu)}, g\left(y^{(\nu)}\right) ; F\right]}\left(y^{*}-y^{(\nu)}\right)\left(y^{*}-g\left(y^{(\nu)}\right)\right),
\end{equation*}
where $[u, v, w ; F]$ represents the second order divided difference of $F$ at the points $u, v, w \in[c, d]$. If $g$ is Lipschitz on $[c, d]$ with constant $L$ and if we assume that there exist the real numbers $M, m>0$ such that:
\begin{equation*}
|[u, v, w ; F]|<M \quad \text { and } \quad|[u, v ; F]|>m,
\end{equation*}
then, since $g\left(y^{*}\right)=y^{*}$, we get
\begin{equation*}
\left|y^{(\nu+1)}-y^{*}\right| \leq \frac{M L}{m}\left|y^{(\nu)}-y^{*}\right|^{2},
\end{equation*}
which shows that the $q$-convergence order of the method (9) is 2, i.e., the same as for the Newton method.
In [7] conditions are given for the convergence of the sequences generated by relation (9), where the function $g$ is chosen such that the sequences $\left(y^{(\nu)}\right)_{\nu \geq 0}$ and $\left(g\left(y^{(\nu)}\right)\right)_{\nu \geq 0}$ approximate the exact solution $y^{*}$ bilaterally. Thus, we have an a posteriori error control.
For the functions $F$ and $g$ we assume the following hypotheses:
($\alpha$) the equations (5) and (10) are equivalent;
($\beta$) the function $g$ is continuous and decreasing on $[c, d]$;
($\gamma$) the equation (5) has a unique solution $y^{*} \in(c, d)$.
The following theorem holds (see [7]):
Theorem 1 If the functions $F$ and $g$ satisfy the conditions $(\alpha)-(\gamma)$ and moreover the following conditions hold:
(i) $F$ is increasing and convex on $[c, d]$;
(ii) $F\left(y_{0}\right)<0$;
(iii) $g\left(y_{0}\right) \leq d$,
then the elements of the sequences $\left(y^{(\nu)}\right)_{\nu \geq 0}$ and $\left(g\left(y^{(\nu)}\right)\right)_{\nu \geq 0}$ belong to the interval $[c, d]$ and the following properties hold:
(j) the sequence $\left(y^{(\nu)}\right)_{\nu \geq 0}$ is increasing and convergent;
(jj) the sequence $\left(g\left(y^{(\nu)}\right)\right)_{\nu \geq 0}$ is decreasing and convergent;
(jjj) $y^{(\nu)} \leq y^{*} \leq g\left(y^{(\nu)}\right), \quad \nu=0,1, \ldots$;
(jv) $\lim _{\nu \rightarrow \infty} y^{(\nu)}=\lim _{\nu \rightarrow \infty} g\left(y^{(\nu)}\right)=y^{*}$;
(v) $\left|y^{*}-y^{(\nu)}\right| \leq\left|g\left(y^{(\nu)}\right)-y^{(\nu)}\right|, \quad \nu=0,1, \ldots$
In the above theorem the auxiliary function $g$ can be taken as $g(y)=y-\frac{F(y)}{F^{\prime}(c)}$. Similar results have been obtained in [7] if $F$ verifies the properties:
- $F$ is increasing and concave; $g$ can be taken as $g(y)=y-\frac{F(y)}{F^{\prime}(d)}$;
- $F$ is decreasing and concave; $g$ can be taken as $g(y)=y-\frac{F(y)}{F^{\prime}(c)}$;
- $F$ is decreasing and convex; $g$ can be taken as $g(y)=y-\frac{F(y)}{F^{\prime}(d)}$.
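As an illustration, the iteration (9) with the choice $g(y)=y-F(y)/F^{\prime}(c)$ for an increasing convex $F$ can be sketched in Python as follows. This is our own minimal implementation, not code from the paper, and the test equation $F(y)=y^{2}-2=0$ on $[c,d]=[1,2]$ is an assumed example.

```python
# A minimal sketch (ours, not from the paper) of the Steffensen-type iteration
# with the auxiliary function g(y) = y - F(y)/F'(c) for an increasing convex F.
import math

def steffensen_bilateral(F, g, y0, tol=1e-14, max_iter=50):
    """Iterate y_{k+1} = y_k - F(y_k)/[y_k, g(y_k); F]; return the lower
    iterates y_k and the upper bounds g(y_k), which bracket the root when
    the monotonicity hypotheses of Theorem 1 hold."""
    lower, upper = [y0], [g(y0)]
    y = y0
    for _ in range(max_iter):
        gy = g(y)
        if abs(gy - y) < tol:              # a posteriori error bound (v)
            break
        dd = (F(gy) - F(y)) / (gy - y)     # first order divided difference
        y = y - F(y) / dd
        lower.append(y)
        upper.append(g(y))
    return lower, upper

F = lambda y: y * y - 2.0                  # increasing, convex on [1, 2]
c = 1.0
g = lambda y: y - F(y) / (2.0 * c)         # g(y) = y - F(y)/F'(c), F'(y) = 2y
lower, upper = steffensen_bilateral(F, g, y0=1.0)
print(abs(lower[-1] - math.sqrt(2.0)))     # final error, at roundoff level
```

Each pair `(lower[k], upper[k])` brackets $\sqrt{2}$, so `upper[k] - lower[k]` is a computable a posteriori error bound, exactly the benefit claimed for the bilateral approach.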
The interval $[a, b]$ is partitioned by the point set $\left\{x_{n}\right\}$ defined by $x_{n}=a+n h$, $n=0,1, \ldots, N$, $h=(b-a) / N$, and $y_{n}$ denotes an approximation to the exact solution $y$ of (1) at $x_{n}$.
If we use an implicit linear multistep method then $y_{n}$, $n=1, \ldots, N$, are the solutions of the equation:
\begin{equation*}
y=h A \phi\left(x_{n}, y\right)+\psi_{n}, \tag{11}
\end{equation*}
where $\psi_{n}=\psi_{n}\left(a, h, y_{n-1}, y_{n-2}, \ldots, y_{0}\right)$. We call this equation the approximant equation and we denote by $y_{n}^{*} \in(c, d)$, $n=1, \ldots, N$, its exact solution.
For each $n=1, \ldots, N$ let $F_{n}:[c, d] \rightarrow \mathbb{R}$ be defined by
\begin{equation*}
F_{n}(y)=y-h A \phi\left(x_{n}, y\right)-\psi_{n} . \tag{12}
\end{equation*}
Then equation (11) can be rewritten in the form $F_{n}(y)=0$.
To approximate bilaterally the solution $y_{n}^{*}$, $n=1, \ldots, N$, we generate the sequence $\left(y_{n}^{(\nu)}\right)_{\nu \geq 0}$ by:
\begin{equation*}
y_{n}^{(\nu+1)}=y_{n}^{(\nu)}-\frac{F_{n}\left(y_{n}^{(\nu)}\right)}{\left[y_{n}^{(\nu)}, g\left(y_{n}^{(\nu)}\right) ; F_{n}\right]}, \quad \nu=0,1, \ldots
\end{equation*}
From Theorem 1, if $F_{n}$ is increasing and convex, and the initial guess $y_{n}^{(0)}$ satisfies $F_{n}\left(y_{n}^{(0)}\right)<0$ and $g\left(y_{n}^{(0)}\right) \leq d$, then the sequences $\left(y_{n}^{(\nu)}\right)_{\nu \geq 0}$, $\left(g\left(y_{n}^{(\nu)}\right)\right)_{\nu \geq 0}$ converge to $y_{n}^{*}$ and we have the inequalities:
\begin{equation*}
y_{n}^{(\nu)} \leq y_{n}^{*} \leq g\left(y_{n}^{(\nu)}\right), \quad \nu=0,1, \ldots
\end{equation*}
The rest of the cases can be treated in a similar fashion.
2 Application to the trapezoidal rule
We consider the trapezoidal rule to integrate the initial value problem (1), for $m=1$, and the Steffensen method described above to solve the approximant equation (11).
The trapezoidal rule is a 1-step Adams-Moulton method (an implicit method), and for (1) it is defined by:
\begin{equation*}
y_{n}=y_{n-1}+\frac{h}{2}\left(f\left(x_{n}, y_{n}\right)+f\left(x_{n-1}, y_{n-1}\right)\right), \quad n=1, \ldots, N .
\end{equation*}
It is known that the trapezoidal rule is an $A$-stable method and has order 2 (see [5]).
For any point $x_{n}$, $n=1, \ldots, N$, we have:
\begin{equation*}
y=y_{n-1}+\frac{h}{2}\left(f\left(x_{n}, y\right)+f\left(x_{n-1}, y_{n-1}\right)\right), \tag{15}
\end{equation*}
and in this case $F_{n}(y)=y-\frac{h}{2} f\left(x_{n}, y\right)-\frac{h}{2} f\left(x_{n-1}, y_{n-1}\right)-y_{n-1}$. Thus, in (11) we have $A=\frac{1}{2}$, $\phi\left(x_{n}, y\right)=f\left(x_{n}, y\right)$ and $\psi_{n}=\frac{h}{2} f\left(x_{n-1}, y_{n-1}\right)+y_{n-1}$, $n=1, \ldots, N$.
For simplicity we consider only the autonomous case, i.e. $f=f(y)$, and in this case equation (15) becomes:
\begin{equation*}
y=\frac{h}{2} f(y)+\psi_{n}, \tag{16}
\end{equation*}
and $F_{n}(y)=y-\frac{h}{2} f(y)-\psi_{n}$, $\psi_{n}=\frac{h}{2} f\left(y_{n-1}\right)+y_{n-1}$, $n=1, \ldots, N$.
Using the fact that
\begin{equation*}
\left[u, v ; F_{n}\right]=1-\frac{h}{2}[u, v ; f], \text { for all } u, v \in[c, d], \tag{17}
\end{equation*}
and
\begin{equation*}
\left[u, v, w ; F_{n}\right]=-\frac{h}{2}[u, v, w ; f], \text { for all } u, v, w \in[c, d], \tag{18}
\end{equation*}
$n=1, \ldots, N$, we obtain that the auxiliary function $g$ can be taken as (see [10]):
\begin{equation*}
g(y)=\frac{\frac{h}{2}\left(f(y)-y[d-\varepsilon, d ; f]\right)+\psi_{n}}{1-\frac{h}{2}[d-\varepsilon, d ; f]},
\end{equation*}
where $\varepsilon$ is sufficiently small such that the exact solution $y_{n}^{*}$ of the equation $F_{n}\left(y_{n}\right)=0$, $n=1, \ldots, N$, belongs to the interval $[c+\varepsilon, d-\varepsilon]$.
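Identities (17) and (18) can also be checked numerically. The snippet below is our own sanity check, not part of the paper; the parameters, sample points, and the choice $f(y)=\cos^{2}(y)$ are arbitrary illustrative values.

```python
# A quick numerical sanity check (ours, not from the paper) of identities
# (17)-(18): for F_n(y) = y - (h/2) f(y) - psi_n, the divided differences
# of F_n are affine in those of f.
import math

def dd1(F, u, v):
    """First order divided difference [u, v; F]."""
    return (F(v) - F(u)) / (v - u)

def dd2(F, u, v, w):
    """Second order divided difference [u, v, w; F]."""
    return (dd1(F, v, w) - dd1(F, u, v)) / (w - u)

h, psi_n = 0.1, 0.3                     # illustrative values
f = lambda y: math.cos(y) ** 2
F_n = lambda y: y - 0.5 * h * f(y) - psi_n

u, v, w = 0.1, 0.4, 0.7                 # arbitrary sample points
lhs17, rhs17 = dd1(F_n, u, v), 1 - 0.5 * h * dd1(f, u, v)
lhs18, rhs18 = dd2(F_n, u, v, w), -0.5 * h * dd2(f, u, v, w)
print(abs(lhs17 - rhs17), abs(lhs18 - rhs18))   # both at roundoff level
```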
We are led to the main results of this work:
Theorem 2 If the function $f$, the step-size $h$, and the initial guesses $y_{n}^{(0)}$, $n=1, \ldots, N$, satisfy the following conditions:
(i) $[u, v, w ; f] \leq 0$, for all $u, v, w \in[c, d]$;
(ii) ($m \leq[u, v ; f] \leq M \leq 0$, for all $u, v \in[c, d]$) or ($0 \leq m \leq[u, v ; f] \leq M$, for all $u, v \in[c, d]$, and $h \leq \frac{2}{M}$);
(iii) $y_{n}^{(0)}-\frac{h}{2} f\left(y_{n}^{(0)}\right)<\psi_{\min }^{n}$;
(iv) $y_{n}^{(0)} M-f\left(y_{n}^{(0)}\right) \geq \frac{2}{h}\left[d\left(M \frac{h}{2}-1\right)+\psi_{\max }^{n}\right]$,
then the elements of the sequences $\left(y_{n}^{(\nu)}\right)_{\nu \geq 0}$, $\left(g\left(y_{n}^{(\nu)}\right)\right)_{\nu \geq 0}$, $n=1, \ldots, N$, belong to the interval $[c, d]$ and the following properties hold:
(j) $\left(y_{n}^{(\nu)}\right)_{\nu \geq 0}$ is increasing and convergent;
(jj) $\left(g\left(y_{n}^{(\nu)}\right)\right)_{\nu \geq 0}$ is decreasing and convergent;
(jjj) $y_{n}^{(\nu)} \leq y_{n}^{*} \leq g\left(y_{n}^{(\nu)}\right), \quad \nu=0,1, \ldots$;
(jv) $\lim _{\nu \rightarrow \infty} y_{n}^{(\nu)}=\lim _{\nu \rightarrow \infty} g\left(y_{n}^{(\nu)}\right)=y_{n}^{*}$;
(v) $\left|y_{n}^{*}-y_{n}^{(\nu)}\right| \leq\left|g\left(y_{n}^{(\nu)}\right)-y_{n}^{(\nu)}\right|, \quad \nu=0,1, \ldots$
Proof: From (17), (18) and (i), (ii) we have $\left[u, v ; F_{n}\right] \geq 0$, $\left[u, v, w ; F_{n}\right] \geq 0$, $n=1, \ldots, N$, for all $u, v, w \in[c, d]$, and we deduce that $F_{n}$ is increasing and convex.
Also, from (iii) and (iv) we obtain that the initial guesses satisfy the inequalities $F_{n}\left(y_{n}^{(0)}\right)<0$ and $g\left(y_{n}^{(0)}\right) \leq d$, $n=1, \ldots, N$.
Using Theorem 1 we deduce that the properties (j)-(v) hold.
The following theorems can be proved in a similar manner:
Theorem 3 If the function $f$, the step-size $h$, and the initial guesses $y_{n}^{(0)}$, $n=1, \ldots, N$, satisfy the following conditions:
(i) $[u, v, w ; f] \leq 0$, for all $u, v, w \in[c, d]$;
(ii) $0 \leq m \leq[u, v ; f] \leq M$, for all $u, v \in[c, d]$;
(iii) $y_{n}^{(0)}-\frac{h}{2} f\left(y_{n}^{(0)}\right)<\psi_{\min }^{n}$;
(iv) $y_{n}^{(0)} m-f\left(y_{n}^{(0)}\right) \geq \frac{2}{h}\left[c\left(m \frac{h}{2}-1\right)+\psi_{\max }^{n}\right]$;
(v) $\frac{2}{m} \leq h$,
then the elements of the sequences $\left(y_{n}^{(\nu)}\right)_{\nu \geq 0}$, $\left(g\left(y_{n}^{(\nu)}\right)\right)_{\nu \geq 0}$, $n=1, \ldots, N$, belong to the interval $[c, d]$ and the following properties hold:
(j) $\left(y_{n}^{(\nu)}\right)_{\nu \geq 0}$ is decreasing and convergent;
(jj) $\left(g\left(y_{n}^{(\nu)}\right)\right)_{\nu \geq 0}$ is increasing and convergent;
(jjj) $g\left(y_{n}^{(\nu)}\right) \leq y_{n}^{*} \leq y_{n}^{(\nu)}, \quad \nu=0,1, \ldots$;
(jv) $\lim _{\nu \rightarrow \infty} y_{n}^{(\nu)}=\lim _{\nu \rightarrow \infty} g\left(y_{n}^{(\nu)}\right)=y_{n}^{*}$;
(v) $\left|y_{n}^{*}-y_{n}^{(\nu)}\right| \leq\left|g\left(y_{n}^{(\nu)}\right)-y_{n}^{(\nu)}\right|, \quad \nu=0,1, \ldots$
Theorem 4 If the function $f$, the step-size $h$ and the initial guesses $y_{n}^{(0)}$, $n=1, \ldots, N$, satisfy the following conditions:
(i) $[u, v, w ; f] \geq 0$, for all $u, v, w \in[c, d]$;
(ii) ($m \leq[u, v ; f] \leq M \leq 0$, for all $u, v \in[c, d]$) or ($0 \leq m \leq[u, v ; f] \leq M$, for all $u, v \in[c, d]$, and $h \leq \frac{2}{M}$);
(iii) $y_{n}^{(0)}-\frac{h}{2} f\left(y_{n}^{(0)}\right)>\psi_{\max }^{n}$;
(iv) $y_{n}^{(0)} M-f\left(y_{n}^{(0)}\right) \leq \frac{2}{h}\left[c\left(M \frac{h}{2}-1\right)+\psi_{\min }^{n}\right]$,
then the elements of the sequences $\left(y_{n}^{(\nu)}\right)_{\nu \geq 0}$, $\left(g\left(y_{n}^{(\nu)}\right)\right)_{\nu \geq 0}$, $n=1, \ldots, N$, belong to the interval $[c, d]$ and the following properties hold:
(j) $\left(y_{n}^{(\nu)}\right)_{\nu \geq 0}$ is decreasing and convergent;
(jj) $\left(g\left(y_{n}^{(\nu)}\right)\right)_{\nu \geq 0}$ is increasing and convergent;
(jjj) $g\left(y_{n}^{(\nu)}\right) \leq y_{n}^{*} \leq y_{n}^{(\nu)}, \quad \nu=0,1, \ldots$;
(jv) $\lim _{\nu \rightarrow \infty} y_{n}^{(\nu)}=\lim _{\nu \rightarrow \infty} g\left(y_{n}^{(\nu)}\right)=y_{n}^{*}$;
(v) $\left|y_{n}^{*}-y_{n}^{(\nu)}\right| \leq\left|g\left(y_{n}^{(\nu)}\right)-y_{n}^{(\nu)}\right|, \quad \nu=0,1, \ldots$
Theorem 5 If the function $f$, the step-size $h$ and the initial guesses $y_{n}^{(0)}$, $n=1, \ldots, N$, satisfy the following conditions:
(i) $[u, v, w ; f] \geq 0$, for all $u, v, w \in[c, d]$;
(ii) $0 \leq m \leq[u, v ; f] \leq M$, for all $u, v \in[c, d]$;
(iii) $y_{n}^{(0)}-\frac{h}{2} f\left(y_{n}^{(0)}\right)>\psi_{\max }^{n}$;
(iv) $y_{n}^{(0)} m-f\left(y_{n}^{(0)}\right) \leq \frac{2}{h}\left[d\left(m \frac{h}{2}-1\right)+\psi_{\min }^{n}\right]$;
(v) $\frac{2}{m} \leq h$,
then the elements of the sequences $\left(y_{n}^{(\nu)}\right)_{\nu \geq 0}$, $\left(g\left(y_{n}^{(\nu)}\right)\right)_{\nu \geq 0}$, $n=1, \ldots, N$, belong to the interval $[c, d]$ and the following properties hold:
(j) $\left(y_{n}^{(\nu)}\right)_{\nu \geq 0}$ is increasing and convergent;
(jj) $\left(g\left(y_{n}^{(\nu)}\right)\right)_{\nu \geq 0}$ is decreasing and convergent;
(jjj) $y_{n}^{(\nu)} \leq y_{n}^{*} \leq g\left(y_{n}^{(\nu)}\right), \quad \nu=0,1, \ldots$;
(jv) $\lim _{\nu \rightarrow \infty} y_{n}^{(\nu)}=\lim _{\nu \rightarrow \infty} g\left(y_{n}^{(\nu)}\right)=y_{n}^{*}$;
(v) $\left|y_{n}^{*}-y_{n}^{(\nu)}\right| \leq\left|g\left(y_{n}^{(\nu)}\right)-y_{n}^{(\nu)}\right|, \quad \nu=0,1, \ldots$
3 Numerical example
We consider the autonomous initial value problem:
\begin{equation*}
\left\{\begin{array}{l}
y^{\prime}(x)=\cos ^{2}(y(x)), \quad x \in[0,1], \\
y(0)=0 .
\end{array}\right. \tag{19}
\end{equation*}
The exact solution is $y:[0,1] \rightarrow \mathbb{R}$, $y(x)=\arctan (x)$, and it is plotted in Figure 1(a) with a continuous line.
If we use the trapezoidal rule to integrate the above initial value problem we must solve, for each mesh point $x_{n}=n h$, $n=1, \ldots, N$, $h=1 / N$, $N \in \mathbb{N}$, the nonlinear equation:
\begin{equation*}
y=\frac{h}{2} \cos ^{2}(y)+\frac{h}{2} \cos ^{2}\left(y_{n-1}\right)+y_{n-1}, \tag{20}
\end{equation*}
where $x_{0}=0$ and we choose $y_{0}=0$.
According to the above sections we can write (20) in the form
\begin{equation*}
F_{n}(y)=0,
\end{equation*}
where $F_{n}(y)=y-\frac{h}{2} \cos ^{2}(y)-\psi_{n}$, and $\psi_{n}=y_{n-1}+\frac{h}{2} \cos ^{2}\left(y_{n-1}\right)$, $n=1, \ldots, N$. It is easy to show that equation (20) has a unique solution $y_{n}^{*} \in\left(0, \frac{\pi}{4}\right)$, $n=1, \ldots, N$, and we will use a Steffensen type method to obtain a numerical approximation, $\tilde{y}_{n}$, for this solution.
From $F_{n}^{\prime}(y)=1+\frac{h}{2} \sin (2 y) \geq 0$ and $F_{n}^{\prime \prime}(y)=h \cos (2 y) \geq 0$, $y \in\left[0, \frac{\pi}{4}\right]$, $n=1, \ldots, N$, we deduce that $F_{n}$ is increasing and convex. Thus, we can define the decreasing function $g$ as:
\begin{equation*}
g(y)=y-\frac{F_{n}(y)}{F_{n}^{\prime}(0)}=y-F_{n}(y)=\frac{h}{2} \cos ^{2}(y)+\psi_{n}, \quad n=1, \ldots, N .
\end{equation*}
Also, from Theorem 2, choosing for each $n=1, \ldots, N$ an initial guess $y_{n}^{(0)}$ that verifies the conditions (iii) and (iv), we obtain bilateral approximations of the solution $y_{n}^{*}$ and an a posteriori error control.
The numerical solution obtained with the method described above for the step size $h=0.05$ is also plotted in Figure 1(a) with a circle marker. The values of the errors $\varepsilon_{n}=\left|y\left(x_{n}\right)-\tilde{y}_{n}\right|$, $n=1, \ldots, N$, are presented in the following table. They are also plotted in Figure 1(b). We observe a very good agreement when we compare the numerical with the analytical solution.
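For reference, the experiment can be reproduced with the short Python sketch below. It is our own reconstruction (function names, tolerances, and the choice of initial guess are ours, not from the paper), combining the trapezoidal rule with the Steffensen iteration and the auxiliary function $g(y)=\frac{h}{2}\cos^{2}(y)+\psi_{n}$ derived above.

```python
# Our own reconstruction of the experiment: trapezoidal rule for
# y' = cos(y)^2, y(0) = 0 on [0, 1], with each implicit equation solved by
# the Steffensen iteration using g(y) = y - F_n(y)/F_n'(0), as in the text.
import math

def solve_ivp_trapezoidal(h, N, tol=1e-14):
    f = lambda y: math.cos(y) ** 2
    ys = [0.0]                                   # y_0 = 0
    for n in range(1, N + 1):
        psi_n = ys[-1] + 0.5 * h * f(ys[-1])
        F_n = lambda y: y - 0.5 * h * f(y) - psi_n
        g = lambda y: 0.5 * h * f(y) + psi_n     # note F_n'(0) = 1
        y = ys[-1]                               # initial guess y_n^(0) (ours)
        for _ in range(50):
            gy = g(y)
            if abs(gy - y) < tol:                # a posteriori bound (v)
                break
            dd = (F_n(gy) - F_n(y)) / (gy - y)   # [y, g(y); F_n]
            y = y - F_n(y) / dd
        ys.append(y)
    return ys

h, N = 0.05, 20
ys = solve_ivp_trapezoidal(h, N)
errors = [abs(math.atan(n * h) - y) for n, y in enumerate(ys)]
print(max(errors))                               # second order accuracy in h
```

The maximum error behaves like $O(h^{2})$, consistent with the order of the trapezoidal rule.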
Figure 1: (a) The exact solution (continuous line) and the numerical solution (circle marker). (b) The values of the errors
References
[1] Agratini O., Chiorean I., Coman Gh., Trîmbiţaş R., Numerical Analysis and Approximation Theory, Vol. III, Presa Universitară Clujeană, Cluj-Napoca, 2002 (in Romanian).
[2] Butcher J.C., Numerical Methods for Ordinary Differential Equations, John Wiley & Sons, Chichester, 2003.
[3] Cătinaş E., Methods of Newton and Newton-Krylov Type, Risoprint, Cluj-Napoca, 2007.
[4] Crouzeix M., Mignot A.L., Analyse numérique des équations différentielles, Masson, Paris, 1989.
[5] Lambert J.D., Numerical Methods for Ordinary Differential Systems. The Initial Value Problem, John Wiley & Sons, Chichester, 1990.
[6] Mattheij R., Molenaar J., Ordinary Differential Equations in Theory and Practice, SIAM, Philadelphia, 2002.
[7] Păvăloiu I., On the monotonicity of the sequences of approximations obtained by Steffensen's method, Mathematica, 35(58), no. 1, pp. 95-101, 1993.
[8] Păvăloiu I., Bilateral approximation for the solution of scalar equations, Rev. Anal. Numér. Théor. Approx., 23, no. 1, pp. 95-101, 1994.
[9] Păvăloiu I., Approximation of the roots of equations by Aitken-Steffensen-type monotonic sequences, Calcolo, 32, pp. 69-82, 1995.
[10] Păvăloiu I., Aitken-Steffensen type methods for nonsmooth functions (III), Rev. Anal. Numér. Théor. Approx., 32, no. 1, pp. 73-77, 2003.