$$y''(x) + f(x, y) = 0, \quad x \in [0, 1], \quad y(a) = \alpha, \ y(b) = \beta, \quad a, b \in (0, 1),$$
which is not a two-point boundary value problem since \(a, b \in (0, 1)\).
It is possible to solve this problem by dividing it into three problems: a two-point boundary value problem (BVP) on \([a, b]\) and two initial value problems (IVPs), on \([0, a]\) and \([b, 1]\).
The aim of this work is to present a solution procedure based on pseudospectral collocation with Chebyshev extreme points combined with a Runge-Kutta method. Finally, some numerical examples are given.
Authors
Daniel N. Pop
(Sibiu)
Radu T. Trimbitas
(Cluj-Napoca)
Ion Pavaloiu (Tiberiu Popoviciu Institute of Numerical Analysis)
D.N. Pop, R.T. Trimbitas, I. Pavaloiu, Solution of a polylocal problem with a pseudospectral method, Annals of the Tiberiu Popoviciu Seminar of Functional Equations, Approximation and Convexity, 8 (2010), pp. 53–63.
About this paper
Journal
Annals of the Tiberiu Popoviciu Seminar of Functional Equations, Approximation and Convexity
\begin{align*}
y''(x)+f(x, y)&=0, \quad x \in[0,1] \tag{1.1}\\
y(a)&=\alpha \tag{1.2}\\
y(b)&=\beta, \quad a, b \in(0,1),\ a<b \tag{1.3}
\end{align*}
where \(a, b, \alpha, \beta \in \mathbb{R}\). This is not a two-point boundary value problem, since \(a, b \in(0,1)\).
We try to solve the problem using a pseudospectral collocation method with Chebyshev extreme points combined with a Runge-Kutta method.
Then, we compare the approaches in terms of error and cost.
Our choice of this method is based on the following reasons:
We write the code using the functions in MATLAB dmsuite [1].
The accuracy of spectral methods is superior to that of finite element methods (FEM) and finite difference methods (FDM): the rate of convergence for problems with smooth solutions is \(O(\exp(-cN))\) or \(O(\exp(-c\sqrt{N}))\), where \(N\) is the number of degrees of freedom in the expansion.
There exist elegant theoretical results on the convergence of collocation methods (see, for example, [2]).
As drawbacks, we mention:
the matrices involved are full, not sparse;
the condition numbers are larger than those of FEM and FDM;
symmetric matrices are replaced by nonsymmetric matrices.
To apply the collocation theory we need an isolated solution \(y(x)\) of the problem (1.4)+(1.5)+(1.6), and this occurs if the linearized problem for \(y(x)\) is uniquely solvable. R. D. Russell and L. F. Shampine [3] study the existence and the uniqueness of the isolated solution.
Theorem 1.1 [3] Suppose that \(y(x)\) is a solution of the boundary value problem (1.4)+(1.5)+(1.6), that the functions
$$f(x, y) \text{ and } \frac{\partial f(x, y)}{\partial y}$$
are defined and continuous for \(a \leq x \leq b\) and \(\left|y_{k}-y^{(k)}(x)\right| \leq \delta\), \(k=0,1\), for some \(\delta>0\), and that the homogeneous equation \(y''(x)=0\) subject to the homogeneous boundary conditions (1.5)+(1.6) has only the trivial solution. If the linearized homogeneous equation
$$z''(x)+\frac{\partial f}{\partial y}(x, y(x))\, z(x)=0,$$
subject to the homogeneous boundary conditions (1.5)+(1.6), has only the trivial solution, then there exists a \(\sigma>0\) such that \(y(x)\) is the unique solution of the BVP in a sphere of radius \(\sigma\) around it.
For the existence and uniqueness of an IVP, we recall the following:
Theorem 1.2 [4, pp. 112-113] Suppose that \(D=\{a \leq x \leq b,\ -\infty<y<\infty\}\) and \(f(x, y)\) is continuous on \(D\). If \(f\) satisfies a Lipschitz condition on \(D\) in the variable \(y\), then the initial value problem (IVP)
\begin{equation*}
\left\{\begin{array}{l}
y' = z \\
z' = -f(x, y), \quad a \leq x \leq b, \quad y(a)=\alpha, \quad y'(a)=\varsigma
\end{array}\right. \tag{1.7}
\end{equation*}
has a unique solution \(y(x)\) for \(a \leq x \leq b\).
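A minimal sketch of integrating (1.7) with MATLAB's ode45; the right-hand side \(f\) and the data \(a\), \(b\), \(\alpha\), \(\varsigma\) below are placeholder values chosen only for illustration.

```matlab
% Sketch: solve the first-order system (1.7) with MATLAB's ode45.
% f, a, b, alpha and varsigma below are placeholder (assumed) data.
f        = @(x, y) exp(y);            % example right-hand side of y'' + f(x,y) = 0
a = 0.2;  b = 1;                      % integration interval [a, b]
alpha    = 0.1;                       % y(a)
varsigma = 0;                         % y'(a)
rhs = @(x, u) [u(2); -f(x, u(1))];    % u = [y; z] with z = y'
[xs, us] = ode45(rhs, [a, b], [alpha; varsigma]);
y = us(:, 1);                         % approximate values of y at the points xs
```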
If the BVP has a unique solution, the requirement \(y(x) \in C^{2}[0,1]\) ensures the existence and uniqueness of the solution of the polylocal problem (1.1)+(1.2)+(1.3).
2 The description of the method
Our second method consists in decomposing the problem (1.1)+(1.2)+(1.3) into three problems:
A BVP on \([a, b]\);
Two IVPs, on \([0, a]\) and \([b, 1]\).
We also suppose that the problem (1.4)+(1.5)+(1.6) satisfies the hypotheses of Theorem 1.1, which guarantee that there exists a \(\sigma>0\) such that \(y(x)\) is the unique solution of the BVP in a sphere of radius \(\sigma\) around it.
For \(0 \leq x \leq a\) and \(b \leq x \leq 1\) we have two initial value problems.
Due to the conditions in Theorems 1.1 and 1.2, the problem (1.1)+(1.2)+(1.3) has a unique solution.
We shall use a pseudospectral method for the solution of (1.4)+(1.5)+(1.6) and a Runge-Kutta method for the two IVPs.
The solution of the two-point boundary value problem follows the ideas in [6]. Let \(y(x)\) be the solution of (1.4)+(1.5)+(1.6). Considering the Lagrange basis \((\ell_{k})\), we have
\begin{equation*}
y(x)=\sum_{k=0}^{N} \ell_{k}(x)\, y(x_{k})+(R_{N} y)(x), \quad x \in[c, d] \tag{2.2}
\end{equation*}
with unknowns \(y_{k}\), \(k=1, \ldots, N-1\); here \(y_{0}=y(x_{0})=\alpha\) and \(y_{N}=y(x_{N})=\beta\). The approximate solution (that is, the collocation polynomial for (1.4)+(1.5)+(1.6)) is the Lagrange interpolation polynomial at the nodes \(\{x_{k}\}\).
If the nodes \((x_{k})\), \(k=0, \ldots, N\), are symmetric with respect to \((c+d)/2\), then \(A\) is centrosymmetric and nonsingular (see [6] for the proof). We recall the definition [7]: an \(m \times m\) matrix \(A\) is centrosymmetric if
$$a_{i, j}=a_{m-i+1,\, m-j+1}, \quad i, j=1, \ldots, m.$$
Examples of such nodes are given by
$$x_{i}=c+\frac{d-c}{N}\, i \quad \text{or} \quad x_{i}=\frac{(d-c) \cos \frac{\pi i}{N}+d+c}{2}, \quad i=0, \ldots, N,$$
i.e. the equispaced and the Chebyshev-Lobatto nodes.
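As a quick illustration of this property (assuming dmsuite's chebdif [1] is on the MATLAB path), the second-order differentiation matrix built on the Chebyshev-Lobatto nodes of \([c, d]\), a symmetric node set, is centrosymmetric up to roundoff; the interval and \(N\) below are arbitrary.

```matlab
% Sketch: Chebyshev-Lobatto nodes of [c, d] and centrosymmetry of the
% associated second-order differentiation matrix (requires dmsuite [1]).
c = 0.2;  d = 0.8;  N = 16;                 % illustrative interval and N
[t, DM] = chebdif(N + 1, 2);                % nodes on [-1, 1] and D^(1), D^(2)
x  = ((d - c) * t + d + c) / 2;             % the Chebyshev-Lobatto nodes above
D2 = DM(:, :, 2) * (2 / (d - c))^2;         % chain rule for the map to [c, d]
J  = rot90(eye(N + 1), 2);                  % exchange (flip) matrix
centro_err = norm(D2 - J * D2 * J)          % ~ 0 (up to roundoff): D2 is centrosymmetric
```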
We introduce the grid \(\Delta\), given by (2.1), obtained by extending the collocation nodes of \([c, d]\) with the points \(x_{-q}, \ldots, x_{-1}\) in \([a, c)\) and \(x_{N+1}, \ldots, x_{N+p}\) in \((d, b]\).
To solve (1.1)+(1.2)+(1.3) numerically on \(\Delta\) given by (2.1), we apply pseudospectral collocation at the points in \([c, d]\) and then a Runge-Kutta method at the other points. To apply the Runge-Kutta method for the solution of the two IVPs on \([a, c]\) and \([d, b]\) we need the derivatives \(y'(c)\) and \(y'(d)\). These can be computed by differentiating formula (2.5).
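A sketch of this coupling step with placeholder data (it assumes dmsuite's chebdif [1] is available; the vector yc stands in for the collocation values on \([c, d]\), which in the actual method are obtained by solving (2.4)):

```matlab
% Sketch: obtain y'(c), y'(d) by differentiating the interpolant on [c, d],
% then integrate the two IVPs with ode45 (placeholder data throughout).
a = 0;  b = 1;  c = 0.2;  d = 0.8;  N = 32;
f  = @(x, y) exp(y);                          % example right-hand side
[t, DM] = chebdif(N + 1, 1);                  % nodes on [-1, 1] and D^(1)
x  = ((d - c) * t + d + c) / 2;               % x(1) = d, x(end) = c
D1 = DM(:, :, 1) * (2 / (d - c));             % scaled first-derivative matrix
yc = x .* (1 - x) / 4;                        % placeholder collocation values
ypd = D1(1,   :) * yc;                        % y'(d)
ypc = D1(end, :) * yc;                        % y'(c)
rhs = @(s, u) [u(2); -f(s, u(1))];            % first-order form of (1.1)
[xl, ul] = ode45(rhs, [c, a], [yc(end); ypc]);  % IVP on [a, c], integrated backwards
[xr, ur] = ode45(rhs, [d, b], [yc(1);  ypd]);   % IVP on [d, b]
```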
In [5] the authors prove the following theorems.
Theorem 2.1 If \(f\) satisfies a Lipschitz condition with respect to the second variable, with Lipschitz constant \(L\),
and \(\left\|A^{-1}\right\| L<1\), then the system (2.4) has a unique solution, which can be computed by the successive approximation method
\begin{equation*}
Y^{(n+1)}=G\left(Y^{(n)}\right), \quad n \in \mathbb{N}^{*} \tag{2.7}
\end{equation*}
with \(Y^{(0)}\) fixed and \(G\) given by (2.6).
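A generic sketch of the iteration (2.7); the map \(G\) from (2.6) and the starting vector are assumed to be supplied by the caller, and the stopping rule is ours, added only for illustration.

```matlab
% Sketch: successive approximations Y^(n+1) = G(Y^(n)) with a simple
% stopping criterion (G, Y0, tol and maxit are assumed to be given).
function Y = successive_approx(G, Y0, tol, maxit)
    Y = Y0;
    for n = 1:maxit
        Ynew = G(Y);
        if norm(Ynew - Y, inf) <= tol   % the update is small enough
            Y = Ynew;
            return
        end
        Y = Ynew;
    end
end
```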
Theorem 2.2 If \(Y=\left[y(x_{1}), \ldots, y(x_{N-1})\right]^{T}\), where \(y(x)\) is the solution of problem (1.4)+(1.5)+(1.6), \(Y_{N}=\left[y_{1}, \ldots, y_{N-1}\right]^{T}\), where \(y_{i}\) are the values of the approximate solution at \(x_{i}\) computed by (2.4), and \(R=\left[-(R_{N} y)''(x_{1}), \ldots, -(R_{N} y)''(x_{N-1})\right]^{T}\), then for the error \(\left\|Y-Y_{N}\right\|\) it holds
then for each point \(x_{i}\) in \(\Delta\), \(i=-q, \ldots, N+p\), \(\left|y(x_{i})-y_{i}\right|=O(h^{k})\).
Proof. The condition \(\left\|A^{-1}\right\| L<1\) ensures that \(G\) is a contraction. From Banach's fixed-point theorem it follows that \((Y^{(n)})\) given by (2.7) converges to the exact solution \(\bar{Y}\) of (2.4), and the following estimate holds
If the accuracy of the collocation method for the BVP is \(O(h^{k})\) (that is, the approximate solution \(y\) and its derivative \(y'\) at \(c\) and \(d\) are within this accuracy limit), and if the Runge-Kutta method is stable and has order \(k\), then the final solution has the same accuracy. The stability and convergence of the Runge-Kutta method are guaranteed by Theorems 5.3.1, page 285, and 5.3.2, page 288, in [8]. \(\square\)
3 Numerical examples
Our combined method was implemented in MATLAB, using the functions chebdif, chebint and chebdifft contained in dmsuite and described
in [1]. We chose \(\{x_{k},\ k=0, \ldots, N\}\) as the extreme Chebyshev points, and the solution values at the other points of \(\Delta\) were computed using the MATLAB solver ode45. Since the successive approximation method is slow, we solve the nonlinear system (2.4) by Newton's method. The MATLAB function solvepolylocalceb solves the nonlinear system and calls the Runge-Kutta solver. The derivatives at \(c\) and \(d\) were computed by calling chebdifft.
Let us consider two nonlinear examples.
Example 3.1 [9, page 491] Bratu's equation for \(\lambda=1\):
\begin{aligned}
y''+e^{y} &=0, \quad x \in(0,1), \\
y(0.2) &=y(0.8)=0.08918993462883.
\end{aligned}
We took \(N=128\) and the tolerance \(\varepsilon=10^{-10}\). The starting value is \(y^{(0)}(x)=\frac{x(1-x)}{4}\). The solution of the nonlinear system is obtained after 4 iterations. The graph of the solution is given in Figure 1.
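For illustration, a minimal reconstruction of the BVP stage of this example (it is not the paper's solvepolylocalceb; it assumes dmsuite's chebdif [1] is available and solves the collocation equations of \(y''+e^{y}=0\) at the interior Chebyshev nodes of \([0.2, 0.8]\) by Newton's method):

```matlab
% Sketch: Chebyshev collocation for y'' + e^y = 0 on [0.2, 0.8] with
% y(0.2) = y(0.8) = 0.08918993462883, solved by Newton's method.
c = 0.2;  d = 0.8;  N = 128;  yb = 0.08918993462883;  tol = 1e-10;
[t, DM] = chebdif(N + 1, 2);
x  = ((d - c) * t + d + c) / 2;            % Chebyshev-Lobatto nodes of [c, d]
D2 = DM(:, :, 2) * (2 / (d - c))^2;        % scaled second-derivative matrix
I  = 2:N;                                  % interior nodes (unknowns)
Y  = x .* (1 - x) / 4;                     % starting value y^(0)(x) = x(1-x)/4
Y([1, N + 1]) = yb;                        % boundary values y(d) = y(c) = yb
for it = 1:20                              % Newton iteration for the collocation system
    F  = D2(I, :) * Y + exp(Y(I));         % residual at interior nodes
    J  = D2(I, I) + diag(exp(Y(I)));       % Jacobian with respect to Y(I)
    dY = -J \ F;
    Y(I) = Y(I) + dY;
    if norm(dY, inf) < tol, break, end
end
plot(x, Y)                                 % cf. Figure 1
```

The derivatives \(y'(0.2)\) and \(y'(0.8)\) needed for the two IVPs can then be obtained from the first-order differentiation matrix, as in the coupling sketch of Section 2.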
The positive solution of this problem represents the average temperature in a reaction-diffusion process. In [10], the authors proved that, for \(p=3\), the problem (3.1) has a unique positive solution. We consider here the "variant"
In this example, \(N=128\), \(\varepsilon=10^{-8}\), \(y^{(0)}=\frac{6 \pi}{\sqrt{2}}\, x(1-x)\). The desired tolerance was obtained after 8 iterations. Figure 2 shows the graph of the solution.
Figure 1: The solution to Bratu's problem for \(\lambda=1\).
Figure 2: The positive solution representing the average temperature in a reaction-diffusion process.
References
[1] J. A. C. Weideman, S. C. Reddy, A MATLAB Differentiation Matrix Suite, ACM Transactions on Mathematical Software, 26(4), 2000, pp. 465-519.
[2] W. Heinrichs, Strong convergence estimates for pseudospectral methods, Applications of Mathematics, 37(6), 1992, pp. 401-417.
[3] R. D. Russell, L. F. Shampine, A Collocation Method for Boundary Value Problems, Numer. Math., 19, 1972, pp. 1-28.
[4] P. Henrici, Discrete Variable Methods in Ordinary Differential Equations, John Wiley & Sons, New York, 1962, pp. 345-384.
[5] F. Costabile, A. Napoli, A Collocation Method for Global Approximation of General Second Order BVPs, Computing Letters, 3(1), 2007, pp. 23-34.
[6] F. A. Costabile, F. Dell'Accio, On the nonsingularity of a special class of centrosymmetric matrices arising in spectral methods in BVPs, Applied Mathematics and Computation, 206, 2008, pp. 991-993.
[7] A. L. Andrew, Centrosymmetric Matrices, SIAM Review, 40(3), 1998, pp. 697-698.
[8] W. Gautschi, Numerical Analysis. An Introduction, Birkhäuser, Boston, Basel, Berlin, 1997.
[9] U. Ascher, R. M. Mattheij, R. D. Russell, Numerical Solution of Boundary Value Problems for Ordinary Differential Equations, SIAM, 1997.
[10] C. I. Gheorghiu, D. Trif, The Numerical Approximation to Positive Solution for some Reaction-Diffusion Problems, Pu.M.A., 11, 2000, pp. 243-253.