Solution of a polylocal problem with a pseudospectral method

Abstract

Consider the problem:

$$y''(x) + f(x, y) = 0, \quad x \in [0, 1], \quad y(a) = \alpha, \ y(b) = \beta, \quad a, b \in (0, 1),$$

which is not a two-point boundary value problem since \(a, b ∈ (0, 1)\).

It is possible to solve this problem by dividing it into the three problems: a two-point boundary value problem (BVP) on \([a, b]\) and two initial value problems (IVP), on \([0, a]\) and \([b, 1]\).

The aim of this work is to present a solution procedure based on pseudospectral collocation with Chebyshev extreme points combined with a Runge-Kutta method. Finally, some numerical examples are given.

Authors

Daniel N. Pop
(Sibiu)

Radu T. Trimbitas
(Cluj-Napoca)

Ion Pavaloiu
(Tiberiu Popoviciu Institute of Numerical Analysis)

Keywords

spectral methods; boundary-value problem; collocation; centrosymmetric matrix

PDF

Scanned paper.

Latex version of the paper.

Cite this paper as:

D.N. Pop, R.T. Trimbitas, I. Pavaloiu, Solution of a polylocal problem with a pseudospectral method, Annals of the Tiberiu Popoviciu Seminar of Functional Equations, Approximation and Convexity, 8 (2010), pp. 53–63.

About this paper

Journal

Annals of the Tiberiu Popoviciu Seminar of Functional Equations, Approximation and Convexity

Publisher Name
DOI

Not available yet.

Print ISSN

1584-4536

Online ISSN

Not available yet.


Paper (preprint) in HTML form


1 Introduction

Consider the problem (PVP):
$$\begin{align*}
y''(x) + f(x, y) = 0, & \quad x \in [0, 1] \tag{1.1}\\
y(a) = \alpha, & \tag{1.2}\\
y(b) = \beta, & \quad a, b \in (0, 1), \ a < b, \tag{1.3}
\end{align*}$$
where \(a, b, \alpha, \beta \in \mathbb{R}\). This is not a two-point boundary value problem, since \(a, b \in (0, 1)\).
We solve the problem using a pseudospectral collocation method with Chebyshev extreme points combined with a Runge-Kutta method, and we compare the results in terms of error and cost.
Our choice of this method is based on the following reasons:
  1. We write the code using the functions of the MATLAB package dmsuite [1].
  2. The accuracy of spectral methods is superior to that of finite element methods (FEM) and finite difference methods (FDM): for problems with smooth solutions the rates of convergence are \(O(\exp(-cN))\) or \(O(\exp(-c\sqrt{N}))\), where \(N\) is the number of degrees of freedom in the expansion.
  3. There exist elegant theoretical results on the convergence of collocation methods (see, for example, [2]).
As drawbacks, we mention:
  1. the matrices involved are full, not sparse;
  2. the condition numbers are larger than those of FEM and FDM;
  3. symmetric matrices are replaced by nonsymmetric matrices.
We also consider the BVP:
$$\begin{align*}
y''(x) + f(x, y) &= 0, \quad x \in [c, d] \tag{1.4}\\
y(c) &= \alpha \tag{1.5}\\
y(d) &= \beta. \tag{1.6}
\end{align*}$$
To apply the collocation theory we need an isolated solution \(y(x)\) of the problem (1.4)+(1.5)+(1.6), and this occurs if the linearized problem for \(y(x)\) is uniquely solvable. R. D. Russell and L. F. Shampine [3] studied the existence and uniqueness of the isolated solution.
Theorem 1.1 [3] Suppose that \(y(x)\) is a solution of the boundary value problem (1.4)+(1.5)+(1.6), that the functions
$$f(x, y) \text{ and } \frac{\partial f(x, y)}{\partial y}$$
are defined and continuous for \(c \leq x \leq d\) and \(|y_k - y^{(k)}(x)| \leq \delta\), \(k = 0, 1\), \(\delta > 0\), and that the homogeneous equation \(y''(x) = 0\) subject to the homogeneous boundary conditions (1.5)+(1.6) has only the trivial solution. If the linear homogeneous equation
$$y''(x) + \sum_{k=0}^{1} \frac{\partial f(x, y)}{\partial y_k}\, y^{(k)}(x) = 0$$
has only the trivial solution, then there exists a \(\sigma > 0\) such that \(y(x)\) is the unique solution of the BVP in the sphere
$$\|w - u''\| \leq \sigma.$$
For the existence and uniqueness of the solution of an IVP, we recall the following:
Theorem 1.2 [4, pp. 112-113] Suppose that \(D = \{a \leq x \leq b, -\infty < y < \infty\}\) and \(f(x, y)\) is continuous on \(D\). If \(f\) satisfies a Lipschitz condition on \(D\) in the variable \(y\), then the initial value problem (IVP)
$$\left\{\begin{aligned}
y' &= z\\
z' &= -f(x, y), \quad a \leq x \leq b, \ y(a) = \alpha, \ y'(a) = \varsigma
\end{aligned}\right. \tag{1.7}$$
has a unique solution \(y(x)\) for \(a \leq x \leq b\).
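For illustration, the system (1.7) is exactly the form integrated by standard Runge-Kutta codes. The sketch below is not the authors' MATLAB implementation but a minimal Python analogue of the classical fourth-order Runge-Kutta method, tested on the hypothetical case \(f(x, y) = y\), whose IVP solution with \(y(0) = 0\), \(y'(0) = 1\) is \(\sin x\):

```python
import math

def rk4_system(f, a, b, alpha, varsigma, n):
    """Integrate the first-order system (1.7): y' = z, z' = -f(x, y),
    with y(a) = alpha, y'(a) = varsigma, by n classical RK4 steps."""
    def rhs(x, y, z):
        return z, -f(x, y)

    h = (b - a) / n
    x, y, z = a, alpha, varsigma
    for _ in range(n):
        k1y, k1z = rhs(x, y, z)
        k2y, k2z = rhs(x + h / 2, y + h / 2 * k1y, z + h / 2 * k1z)
        k3y, k3z = rhs(x + h / 2, y + h / 2 * k2y, z + h / 2 * k2z)
        k4y, k4z = rhs(x + h, y + h * k3y, z + h * k3z)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        z += h / 6 * (k1z + 2 * k2z + 2 * k3z + k4z)
        x += h
    return y, z

# y'' + y = 0, y(0) = 0, y'(0) = 1  =>  y(x) = sin(x), y'(x) = cos(x)
y1, z1 = rk4_system(lambda x, y: y, 0.0, 1.0, 0.0, 1.0, 100)
```

With \(h = 10^{-2}\) the global error is \(O(h^4)\), consistent with the order-\(k\) assumption used later in Theorem 2.3.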
If the BVP has a unique solution, the requirement \(y(x) \in C^2[0, 1]\) ensures the existence and uniqueness of the solution of the PVP.

2 The description of the method

Our method consists in decomposing the problem (1.1)+(1.2)+(1.3) into three problems:
  1. A BVP on [ a , b ] [ a , b ] [a,b][a, b][a,b]
  2. Two IVPs on [ 0 , a ] [ 0 , a ] [0,a][0, a][0,a] and [ b , 1 ] [ b , 1 ] [b,1][b, 1][b,1].
We also suppose that the problem (1.4)+(1.5)+(1.6) satisfies the hypotheses of Theorem 1.1, which guarantees that there exists a \(\sigma > 0\) such that \(y(x)\) is the unique solution of the BVP in the sphere
$$\|w - u''\| \leq \sigma.$$
For \(0 \leq x \leq a\) and \(b \leq x \leq 1\) we have two initial value problems.
Under the conditions of Theorems 1.1 and 1.2, the problem (1.1)+(1.2)+(1.3) has a unique solution.
Consider the grid
$$\Delta:\ a = x_{-q} < \cdots < x_{-1} < c = x_0 < x_1 < \cdots < x_N = d < x_{N+1} < \cdots < x_{N+p} = b. \tag{2.1}$$
We shall use a pseudospectral method for the solution of (1.4)+(1.5)+(1.6) and a Runge-Kutta method for the two IVPs.
The solution of the two-point boundary value problem follows the ideas in [6]. Let \(y(x)\) be the solution of (1.4)+(1.5)+(1.6). Considering the Lagrange basis \((\ell_k)\), we have
$$y(x) = \sum_{k=0}^{N} \ell_k(x)\, y(x_k) + (R_N y)(x), \quad x \in [c, d], \tag{2.2}$$
where
$$(R_N y)(x) = \frac{y^{(N+1)}(\xi)}{(N+1)!}\,(x - x_0) \cdots (x - x_N)$$
is the remainder of Lagrange interpolation and
$$\ell_k(x) = \prod_{\substack{j=0 \\ j \neq k}}^{N} \frac{x - x_j}{x_k - x_j}.$$
Since y ( x ) y ( x ) y(x)y(x)y(x) fulfills the differential equation (1.4), we obtain
$$\sum_{k=0}^{N} \ell_k''(x_i)\, y(x_k) + (R_N y)''(x_i) = -f(x_i, y(x_i)), \quad i = 1, \ldots, N-1. \tag{2.3}$$
Setting \(y(x_k) = y_k\) and ignoring the remainder, one obtains the nonlinear system
$$\sum_{k=0}^{N} \ell_k''(x_i)\, y_k = -f(x_i, y_i), \quad i = 1, \ldots, N-1, \tag{2.4}$$
with unknowns \(y_k\), \(k = 1, \ldots, N-1\); here \(y_0 = y(x_0) = \alpha\) and \(y_N = y(x_N) = \beta\). The approximate solution (that is, the collocation polynomial for (1.4)+(1.5)+(1.6)) is the Lagrange interpolation polynomial at the nodes \(\{x_k\}\):
$$y_N(x) = \sum_{k=0}^{N} \ell_k(x)\, y_k. \tag{2.5}$$
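Once the nodal values \(y_k\) are available, (2.5) is best evaluated by the barycentric interpolation formula rather than by forming the \(\ell_k\) explicitly. A sketch (illustrative, not from the paper; it assumes Chebyshev-Lobatto nodes, for which the barycentric weights are \((-1)^k\), halved at the endpoints):

```python
import numpy as np

def chebpts(N, c, d):
    # Chebyshev-Lobatto nodes mapped to [c, d]
    return ((d - c) * np.cos(np.pi * np.arange(N + 1) / N) + d + c) / 2

def bary_eval(x, xk, yk):
    """Evaluate the interpolation polynomial (2.5) at x by the barycentric
    formula; the weights below are valid for Chebyshev-Lobatto nodes on
    any interval (a common scale factor cancels in the ratio)."""
    w = (-1.0) ** np.arange(len(xk))
    w[0] /= 2
    w[-1] /= 2
    diff = x - xk
    if np.any(diff == 0):              # x coincides with a node
        return yk[np.argmin(np.abs(diff))]
    t = w / diff
    return np.dot(t, yk) / np.sum(t)

xk = chebpts(8, 0.0, 1.0)
yk = xk**3                             # sample data: y = x^3, degree <= N
val = bary_eval(0.37, xk, yk)          # exact up to rounding for polynomials
```

The barycentric form is numerically stable and costs \(O(N)\) per evaluation point.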
The nonlinear system (2.4) can be rewritten as
$$A Y_N = F(Y_N) + b_N,$$
where
$$\begin{aligned}
A &= [a_{ik}], \quad a_{ik} = \ell_k''(x_i), \quad k, i = 1, \ldots, N-1,\\
F(Y_N) &= \begin{bmatrix} -f(x_1, y_1) \\ \vdots \\ -f(x_{N-1}, y_{N-1}) \end{bmatrix}, \quad
b_N = \begin{bmatrix} -\alpha\, \ell_0''(x_1) - \beta\, \ell_N''(x_1) \\ \vdots \\ -\alpha\, \ell_0''(x_{N-1}) - \beta\, \ell_N''(x_{N-1}) \end{bmatrix}.
\end{aligned}$$
If the nodes \((x_k)\), \(k = 0, \ldots, N\), are symmetric with respect to \((c+d)/2\), then \(A\) is centrosymmetric and hence nonsingular (see [6] for the proof). We recall the definition [7]: an \(m \times m\) matrix \(A\) is centrosymmetric if
$$a_{i,j} = a_{m-i+1,\, m-j+1}, \quad i, j = 1, \ldots, m.$$
Examples of such nodes are given by
$$x_i = c + \frac{d-c}{N}\, i \quad \text{or} \quad x_i = \frac{(d-c)\cos\frac{\pi i}{N} + d + c}{2}, \quad i = 0, \ldots, N,$$
i.e. the equispaced and the Chebyshev-Lobatto nodes.
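The centrosymmetry of \(A\) can be checked numerically. The sketch below (illustrative, not the paper's code) builds the first-order Chebyshev differentiation matrix by the standard formula underlying dmsuite [1], squares it to obtain \(a_{ik} = \ell_k''(x_i)\) on an interval \([c, d]\), and verifies \(a_{i,j} = a_{m-i+1,\, m-j+1}\) for the interior block:

```python
import numpy as np

def cheb(N):
    """First-order Chebyshev differentiation matrix on the N+1
    Chebyshev-Lobatto points cos(pi*j/N), j = 0..N."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))        # negative-sum trick for the diagonal
    return D, x

N, c_, d_ = 8, 0.2, 0.8
D, t = cheb(N)                          # nodes on [-1, 1]
x = ((d_ - c_) * t + d_ + c_) / 2       # mapped nodes, symmetric about (c+d)/2
D2 = (2.0 / (d_ - c_)) ** 2 * (D @ D)   # second-derivative matrix, l_k''(x_i)
A = D2[1:N, 1:N]                        # interior block used in (2.4)
```

Flipping the interior block about both its center row and center column leaves it unchanged, which is exactly the centrosymmetry property.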
We introduce
$$G(Y) = A^{-1} F(Y) + A^{-1} b_N. \tag{2.6}$$
To solve (1.1)+(1.2)+(1.3) numerically on the grid \(\Delta\) given by (2.1), we apply pseudospectral collocation at the points in \([c, d]\) and then a Runge-Kutta method at the remaining points. To apply the Runge-Kutta method to the two IVPs on \([a, c]\) and \([d, b]\) we need the derivatives \(y'(c)\) and \(y'(d)\); these can be computed by differentiating formula (2.5).
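For instance, with Chebyshev-Lobatto nodes the endpoint derivatives can be read off by applying the first-order differentiation matrix to the nodal values. An illustrative Python sketch (the paper does this in MATLAB via the dmsuite routines), tested on the sample data \(y(x) = x^2\):

```python
import numpy as np

def cheb(N):
    # First-order Chebyshev differentiation matrix on cos(pi*j/N), j = 0..N
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    D = np.outer(c, 1.0 / c) / (x[:, None] - x[None, :] + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N, c_, d_ = 12, 0.2, 0.8
D, t = cheb(N)
x = ((d_ - c_) * t + d_ + c_) / 2       # nodes on [c, d]; x[0] = d, x[N] = c
yk = x**2                               # sample nodal values of y(x) = x^2
dy = (2.0 / (d_ - c_)) * (D @ yk)       # derivative of (2.5) at all nodes
dy_d, dy_c = dy[0], dy[-1]              # y'(d) and y'(c) for starting the IVPs
```

Differentiation of the interpolant is exact for polynomials of degree at most \(N\), so here \(y'(c) = 2c\) and \(y'(d) = 2d\) up to rounding.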
In [5] the authors prove the following theorems.
Theorem 2.1 If \(f\) satisfies a Lipschitz condition with respect to the second variable,
$$|f(x, u_1) - f(x, u_2)| \leq L\, |u_1 - u_2|,$$
and \(\|A^{-1}\| L < 1\), then the system (2.4) has a unique solution, which can be computed by the successive approximation method
$$Y^{(n+1)} = G(Y^{(n)}), \quad n \in \mathbb{N}, \tag{2.7}$$
with Y ( 0 ) Y ( 0 ) Y^((0))Y^{(0)}Y(0) fixed and G G GGG given by (2.6).
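Iteration (2.7) can be sketched in Python on a manufactured test problem (illustrative, not from the paper): \(y'' + \sin y + \pi^2 \sin \pi x - \sin(\sin \pi x) = 0\) on \([0, 1]\) with \(y(0) = y(1) = 0\), whose exact solution is \(\sin \pi x\); here \(\sin\) has Lipschitz constant \(L = 1\) and \(\|A^{-1}\|\) is small, so \(G\) is a contraction:

```python
import numpy as np

def cheb(N):
    # First-order Chebyshev differentiation matrix on cos(pi*j/N), j = 0..N
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    D = np.outer(c, 1.0 / c) / (x[:, None] - x[None, :] + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 16
D, t = cheb(N)
x = (t + 1.0) / 2                       # Chebyshev-Lobatto nodes on [0, 1]
D2 = 4.0 * (D @ D)                      # second-derivative matrix on [0, 1]
A = D2[1:N, 1:N]                        # interior block; b_N = 0 (zero BCs)
xi = x[1:N]

def f(x, y):
    # manufactured right-hand side so that the exact solution is sin(pi x)
    return np.sin(y) + np.pi**2 * np.sin(np.pi * x) - np.sin(np.sin(np.pi * x))

Ainv = np.linalg.inv(A)
Y = np.zeros(N - 1)                     # Y^(0) = 0
for n_iter in range(1, 200):
    Ynew = Ainv @ (-f(xi, Y))           # Y^(n+1) = G(Y^(n)) = A^{-1} F(Y^(n))
    if np.max(np.abs(Ynew - Y)) < 1e-12:
        Y = Ynew
        break
    Y = Ynew

err = np.max(np.abs(Y - np.sin(np.pi * xi)))
```

The iterates converge geometrically with ratio roughly \(\|A^{-1}\| L\), and the limit agrees with the exact solution to spectral accuracy.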
Theorem 2.2 Let \(Y = [y(x_1), \ldots, y(x_{N-1})]^T\), where \(y(x)\) is the solution of problem (1.4)+(1.5)+(1.6); let \(Y_N = [y_1, \ldots, y_{N-1}]^T\), where the \(y_i\) are the values of the approximate solution at \(x_i\) computed from (2.4); and let \(R = [-(R_N y)''(x_1), \ldots, -(R_N y)''(x_{N-1})]^T\). Then for the error \(\|Y - Y_N\|\) it holds that
$$\|Y - Y_N\| \leq \frac{\|A^{-1}\|\, \|R\|}{1 - \|A^{-1}\| L}. \tag{2.8}$$
Combining the results of Theorems 2.1 and 2.2 with the stability and convergence of Runge-Kutta methods, we have:
Theorem 2.3 If
$$\frac{\|A^{-1}\|\, \|R\|}{1 - \|A^{-1}\| L} = O(h^k)$$
and
$$\begin{aligned}
|y_N'(c) - y'(c)| &= O(h^k),\\
|y_N'(d) - y'(d)| &= O(h^k),
\end{aligned}$$
then for each point \(x_i\) in \(\Delta\), \(i = -q, \ldots, N+p\), it holds that \(|y(x_i) - y_i| = O(h^k)\).
Proof. The condition \(\|A^{-1}\| L < 1\) assures us that \(G\) is a contraction. From Banach's fixed point theorem it follows that the sequence \((Y^{(n)})\) given by (2.7) converges to the exact solution \(\bar{Y}\) of (2.4), and the following estimate holds:
$$\|\bar{Y} - Y^{(n)}\| \leq \frac{(\|A^{-1}\| L)^n}{1 - \|A^{-1}\| L}\, \|Y^{(1)} - Y^{(0)}\|.$$
If the accuracy of the collocation method for the BVP is \(O(h^k)\) (that is, the approximate solution \(y\) and its derivative \(y'\) at \(c\) and \(d\) are within this accuracy limit), and if the Runge-Kutta method is stable and of order \(k\), then the final solution has the same accuracy. The stability and convergence of the Runge-Kutta method are guaranteed by Theorems 5.3.1 (page 285) and 5.3.2 (page 288) in [8]. \(\square\)

3 Numerical examples

Our combined method was implemented in MATLAB, using the functions chebdif, chebint and chebdifft contained in dmsuite and described in [1]. We chose \(\{x_k,\ k = 0, \ldots, N\}\) as Chebyshev extreme points, and the other points of \(\Delta\) were computed using the MATLAB solver ode45. Since the successive approximation method is slow, we solve the nonlinear system (2.4) by Newton's method. The MATLAB function solvepolylocalceb solves the nonlinear system and calls the Runge-Kutta solver. The derivatives at \(c\) and \(d\) were computed by calling chebdifft.
Let us consider two nonlinear examples.
Example 3.1 [9, page 491] Bratu's equation for \(\lambda = 1\):
$$\begin{aligned}
y'' + e^y &= 0, \quad x \in (0, 1),\\
y(0.2) &= y(0.8) = 0.08918993462883.
\end{aligned}$$
We took \(N = 128\) and the tolerance \(\varepsilon = 10^{-10}\). The starting value is \(y^{(0)}(x) = \frac{x(1-x)}{4}\). The solution of the nonlinear system is obtained after 4 iterations. The graph of the solution is given in Figure 1.
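A self-contained Python sketch of this computation (illustrative only, with a smaller \(N\) and a flat initial guess; the paper's implementation is the MATLAB function solvepolylocalceb): Chebyshev collocation on \([0.2, 0.8]\) and Newton's method for the nonlinear system.

```python
import numpy as np

def cheb(N):
    # First-order Chebyshev differentiation matrix on cos(pi*j/N), j = 0..N
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    D = np.outer(c, 1.0 / c) / (x[:, None] - x[None, :] + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N, a, b, beta = 16, 0.2, 0.8, 0.08918993462883
D, t = cheb(N)
x = ((b - a) * t + b + a) / 2            # Chebyshev-Lobatto nodes on [0.2, 0.8]
D2 = (2.0 / (b - a)) ** 2 * (D @ D)      # second-derivative collocation matrix

y_full = np.full(N + 1, beta)            # flat initial guess satisfying the BCs
for _ in range(30):                      # Newton's method on the interior unknowns
    r = (D2 @ y_full)[1:N] + np.exp(y_full[1:N])      # residual of y'' + e^y = 0
    if np.max(np.abs(r)) < 1e-9:
        break
    J = D2[1:N, 1:N] + np.diag(np.exp(y_full[1:N]))   # Jacobian of the system
    y_full[1:N] += np.linalg.solve(J, -r)
res = np.max(np.abs((D2 @ y_full)[1:N] + np.exp(y_full[1:N])))
```

Newton reaches the residual floor in a few iterations, and the computed solution is symmetric about \(x = 0.5\) with its maximum at the center, as expected for the lower Bratu branch.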
Example 3.2 Consider the problem
$$\begin{gathered}
u'' + u^p = 0, \quad 0 < x < 1, \tag{3.1}\\
u(0) = 0, \quad u(1) = 0.
\end{gathered}$$
The positive solution of this problem represents the average temperature in a reaction-diffusion process. In [10], the authors proved that, for \(p = 3\), problem (3.1) has a unique positive solution. We consider here the "variant"
$$\begin{aligned}
u'' + u^3 &= 0, \quad 0 < x < 1,\\
u(0.2) &= u(0.8) = 1.929990320692795.
\end{aligned}$$
In this example, \(N = 128\), \(\varepsilon = 10^{-8}\), \(y^{(0)} = \frac{6\pi}{\sqrt{2}}\, x(1-x)\). The desired tolerance was obtained after 8 iterations. Figure 2 shows the graph of the solution.
Figure 1: The solution to Bratu's problem for λ = 1 λ = 1 lambda=1\lambda=1λ=1.
Figure 2: The positive solution of the average-temperature reaction-diffusion problem.

References

[1] J. A. C. Weideman, S. C. Reddy, A MATLAB Differentiation Matrix Suite, ACM Transactions on Mathematical Software, 26(4), 2000, pp. 465-519.
[2] W. Heinrichs, Strong convergence estimates for pseudospectral methods, Applications of Mathematics, 37(6), 1992, pp. 401-417.
[3] R. D. Russell, L. F. Shampine, A Collocation Method for Boundary Value Problems, Numer. Math., 19 (1972), pp. 1-28.
[4] P. Henrici, Discrete Variable Methods in Ordinary Differential Equations, John Wiley, New York, 1962, pp. 345-384.
[5] F. Costabile, A. Napoli, A Collocation Method for Global Approximation of General Second Order BVPs, Computing Letters, 3(1), 2007, pp. 23-34.
[6] F. A. Costabile, F. Dell'Accio, On the nonsingularity of a special class of centrosymmetric matrices arising in spectral methods in BVPs, Applied Mathematics and Computation, 206, 2008, pp. 991-993.
[7] A. L. Andrew, Centrosymmetric Matrices, SIAM Review, 40(3), 1998, pp. 697-698.
[8] W. Gautschi, Numerical Analysis. An Introduction, Birkhäuser, Boston, Basel, Berlin, 1997.
[9] U. Ascher, R. M. Mattheij, R. D. Russell, Numerical Solution of Boundary Value Problems for Ordinary Differential Equations, SIAM, 1997.
[10] C. I. Gheorghiu, D. Trif, The Numerical Approximation to Positive Solution for some Reaction-Diffusion Problems, Pu.M.A., 11, 2000, pp. 243-253.
