All roots spectral methods: Constraints, floating point arithmetic and root exclusion

Abstract

The nonlinear two-point boundary value problem (TPBVP for short) \[u_{xx}+u^{3}=0,\quad u(0)=u(1)=0,\] offers several insights into spectral methods.

First, it has been proved a priori that \[\int_{0}^{1} u(x)\,dx=\frac{\pi}{\sqrt{2}}.\] By building this constraint into the spectral approximation, the accuracy of \(N+1\) degrees of freedom is achieved from the work of solving a system with only \(N\) degrees of freedom. When \(N\) is small, generic polynomial system solvers, such as those in the computer algebra system Maple, can find all roots of a polynomial system, such as a spectral discretization of the TPBVP.

Our second point is that floating point arithmetic in lieu of exact arithmetic can double the largest practical value of \(N\). (Rational numbers with a huge number of digits are avoided, and eliminating \(M\) symbols like \(\sqrt{2}\) and \(\pi\) reduces \((N+M)\)-variate polynomials to polynomials in just the \(N\) unknowns.) Third, a disadvantage of an "all roots" approach is that the polynomial solver generates many roots (\(3^{N}-1\) for our example) which are genuine solutions to the \(N\)-term discretization but spurious in the sense that they are not close to the spectral coefficients of a true solution to the TPBVP.

We show here that a good tool for “root-exclusion” is calculating \[\rho=\sqrt{\sum\limits_{n=1}^{N}b_{n}^{2}};\] spurious roots have \(\rho\) larger than that for the physical solution by at least an order of magnitude. The \(\rho\)-criterion is suggestive rather than infallible, but root exclusion is very hard, and the best approach is to apply multiple tools with complementary failings.

Authors

John P. Boyd
(Department of Climate & Space Sciences and Engineering, University of Michigan, United States)

Calin-Ioan Gheorghiu
(Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy)

Keywords

Chebyshev polynomials; nonlinear ordinary differential equations; two-point boundary value problem; lemniscate elliptic function; computer algebra.

References

[1] J.P. Boyd, Tracing multiple solution branches for nonlinear ordinary differential equations: Chebyshev and Fourier spectral methods and a degree-increasing spectral homotopy [DISH], J. Sci. Comput. 19 (2016) 1113–1143.
[2] J.P. Boyd, Degree-increasing [N to N+1] homotopy for Chebyshev and Fourier spectral methods, Appl. Math. Lett. 57 (2016) 77–81.
[3] C.I. Gheorghiu, D. Trif, The numerical approximation to positive solution for some reaction–diffusion problems, Pure Math. Appl.: Math. Optim. 11 (2001) 243–253.
[4] J.P. Boyd, Chebyshev and Fourier Spectral Methods, Dover, New York, 2001.
[5] C.-I. Gheorghiu, Spectral Methods for Differential Problems, Casa Cartii de Stiinta, Cluj-Napoca, Romania, 2007, 157 pp. Available at http://www.ictp.acad.ro/gheorghiu/spectral.pdf.
[6] B.A. Finlayson, The Method of Weighted Residuals and Variational Principles, second ed., SIAM, New York, 2013.
[7] J.P. Boyd, Chebyshev and Legendre spectral methods in algebraic manipulation languages, J. Symbolic Comput. 16 (1993) 377–399.
[8] J.P. Boyd, Correcting three errors in Kantorovich & Krylov's approximate methods of higher analysis: Energizing perturbation series and Chebyshev and Legendre spectral algorithms with computer algebra, Amer. Math. Monthly 123 (2016) 241–257.

Cite this paper as

J.P. Boyd, C.I. Gheorghiu, All roots spectral methods: Constraints, floating point arithmetic and root exclusion, Applied Mathematics Letters 67 (2017) 28–32
DOI: 10.1016/j.aml.2016.11.015


About this paper

Journal

Applied Mathematics Letters

Publisher Name

Elsevier

Print ISSN
0893-9659


Paper (preprint) in HTML form


All roots spectral methods: Constraints, floating point arithmetic and root exclusion

John P. Boyd \(^{a,*}\), Călin-Ioan Gheorghiu \(^{b}\)
\(^{a}\) Department of Climate & Space Sciences and Engineering, University of Michigan, 2455 Hayward Avenue, Ann Arbor, MI 48109, United States
\(^{b}\) Romanian Academy, "T. Popoviciu" Institute of Numerical Analysis, Cluj-Napoca, Romania

ABSTRACT

The nonlinear two-point boundary value problem (TPBVP for short)
\begin{equation*}
u_{xx}+u^{3}=0, \qquad u(0)=u(1)=0,
\end{equation*}
offers several insights into spectral methods. First, it has been proved a priori that \(\int_{0}^{1} u(x)\,dx=\pi/\sqrt{2}\). By building this constraint into the spectral approximation, the accuracy of \(N+1\) degrees of freedom is achieved from the work of solving a system with only \(N\) degrees of freedom. When \(N\) is small, generic polynomial system solvers, such as those in the computer algebra system Maple, can find all roots of the polynomial system, such as a spectral discretization of the TPBVP. Our second point is that floating point arithmetic in lieu of exact arithmetic can double the largest practical value of \(N\). (Rational numbers with a huge number of digits are avoided, and eliminating \(M\) symbols like \(\sqrt{2}\) and \(\pi\) reduces \((N+M)\)-variate polynomials to polynomials in just the \(N\) unknowns.) Third, a disadvantage of an "all roots" approach is that the polynomial solver generates many roots (\(3^{N}-1\) for our example) which are genuine solutions to the \(N\)-term discretization but spurious in the sense that they are not close to the spectral coefficients of a true solution to the TPBVP. We show here that a good tool for "root-exclusion" is calculating \(\rho \equiv \sqrt{\sum_{n=1}^{N} b_{n}^{2}}\); spurious roots have \(\rho\) larger than that for the physical solution by at least an order of magnitude. The \(\rho\)-criterion is suggestive rather than infallible, but root exclusion is very hard, and the best approach is to apply multiple tools with complementary failings.
© 2016 Elsevier Ltd. All rights reserved.

1. Introduction

The two-point boundary-value problem
\begin{equation*}
u_{xx}+u^{3}=0, \quad u(0)=u(1)=0 \tag{1}
\end{equation*}
has the exact solution
\begin{equation*}
u=2\mathcal{K}\operatorname{cn}(2\mathcal{K}(x-1/2),\,1/\sqrt{2}), \quad \mathcal{K}=1.85407 \tag{2}
\end{equation*}
where "cn" is the usual Jacobian elliptic function and \(\mathcal{K}=K(1/\sqrt{2})\) is the complete elliptic integral for the elliptic modulus \(k=1/\sqrt{2}\). Boyd showed that the problem is "cryptoperiodic" in the sense that the most efficient spectral basis is a Fourier basis [1] because the elliptic function is periodic. Here we apply a polynomial basis, partly for comparison and partly, more importantly, because polynomial approximations are more widely used than the Fourier basis.
Although difficult problems require \(N\) very large, where \(N\) is the number of degrees of freedom, spectral methods are so powerful that they are often accurate even for very small \(N\). This makes it possible to bypass Newton's iteration and its ilk by applying a polynomial system solver to find all zeros of the discrete system. This system of \(N\) equations in \(N\) unknowns is the spectral discretization of the TPBVP, and the unknowns are the coefficients of the Chebyshev polynomial approximation [1,2].

2. Imposing an integral constraint

\(N\) can be minimized by incorporating symmetry and boundary conditions into the form of the approximation. Here \(u(x)\) is symmetric with respect to reflection about the midpoint of the interval [1]; this symmetry can be imposed by including only basis functions with the same symmetry, that is, only even degree (shifted) Chebyshev polynomials, yielding
\begin{equation*}
u_{N}(x)=x(1-x)\left\{b_{0}+\sum_{n=1}^{N} b_{n} T_{2n}(2x-1)\right\}. \tag{3}
\end{equation*}
Independent of the coefficients \(b_{n}\), \(u_{N}(0)=u_{N}(1)=0\) and \(u_{N}(1-x)=u_{N}(x)\).
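Both properties are easy to verify symbolically; a minimal Maple sketch (ours, with the illustrative choice \(N=2\) and the name uN) is:

with(orthopoly):
N := 2:
uN := x*(1-x)*(b[0] + add(b[n]*T(2*n, 2*x-1), n = 1..N)):
simplify(eval(uN, x = 0)), simplify(eval(uN, x = 1));   # 0, 0 for arbitrary b[n]
simplify(eval(uN, x = 1 - x) - uN);                     # 0: u_N(1-x) = u_N(x)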
Gheorghiu and Trif [3] observed that
\begin{equation*}
\int_{0}^{1} u(x)\,dx=\pi/\sqrt{2}. \tag{4}
\end{equation*}
This can be used to express \(b_{0}\) in terms of the other \(b_{n}\). We assume that the two boundary conditions will be incorporated into the algorithm also. Thus, although the approximation contains only \(N\) degrees of freedom, \(u_{N}\) is a polynomial of degree \(2N+2\) and thus a polynomial with \(2N+3\) coefficients.
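The elimination of \(b_0\) is a single solve; a minimal sketch (mirroring the corresponding lines of the full listing in Table 2) for \(N=1\) is:

with(orthopoly):
N := 1:
u := x*(1-x)*(b[0] + add(b[n]*T(2*n, 2*x-1), n = 1..N)):
b[0] := solve(int(u, x = 0..1) = Pi/sqrt(2), b[0]);   # b[0] as a linear function of b[1]..b[N]

Because \(\int_0^1 x(1-x)\,dx = 1/6\) and \(\int_0^1 x(1-x)T_{2}(2x-1)\,dx = -1/10\), this should return \(b_0 = 3\sqrt{2}\,\pi + \tfrac{3}{5}\,b_1\), linear in the remaining unknowns.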
Table 1 shows that for this smooth, albeit cubically nonlinear, problem the four-degree-of-freedom approximation (\(N=4\)) has a maximum pointwise error which is only 1/14,600 of the maximum of \(u(x)\). The complete Maple listing is given in Table 2.
The theory and practice of low degree polynomial spectral methods are described in Chapter 20 of [4], Chapter 2 of [5], Chapters 1 through 6 of [6], the review [7] and [1,7,8].
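The error entries of Table 1 are easy to reproduce. The following sketch (not part of the listing in Table 2) compares the tabulated \(N=3\) coefficients with the exact solution (2) on a uniform grid; the coefficients are rounded in the table, so only the leading digits of the reciprocal relative error, about 2500, should be recovered.

restart; with(orthopoly): Digits := 16:
K := EllipticK(1/sqrt(2)):
uex := x -> 2*K*JacobiCN(2*K*(x - 1/2), 1/sqrt(2)):           # exact solution (2)
b := [11.79043820, -2.50489438, 0.42352194, -0.107929300]:    # b[0]..b[3], Table 1, N = 3
uN := x*(1-x)*add(b[n+1]*T(2*n, 2*x-1), n = 0..3):
errmax := max(seq(abs(evalf(uex(i/200.) - eval(uN, x = i/200.))), i = 0..200)):
evalf(uex(1/2)/errmax);   # reciprocal relative error; should be close to 2500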

3. The downside of exact arithmetic

In our Maple 15 system running in Windows 7, the largest feasible number of degrees of freedom was limited by system crashes to \(N_{\text{feasible}}=2\) when exact arithmetic was used as far as possible in the computation. \(N_{\text{feasible}}\) was doubled by altering a single line:
\begin{equation*}
r[m] \rightarrow \operatorname{evalf}(r[m]) \tag{5}
\end{equation*}
where \(r[m]\) is the \(m\)th equation of the Galerkin system and where "evalf" is the Maple command that forces the replacement of symbols like \(\sqrt{2}\), \(\pi\) and \(\operatorname{BesselJ}(0)\) by their floating point equivalents. Why does "evalf" make such a difference?
Table 1
Chebyshev coefficients and errors relative to the maximum of \(u(x)\). Here \(u_{N}(x)=x(1-x)\left\{\sum_{n=0}^{N} b_{n} T_{2n}(2x-1)\right\}\). The relative error is \(\max_{x\in[0,1]}\left|u(x)-u_{N}(x)\right| / u(\tfrac{1}{2})\) where \(\max_{x\in[0,1]}(u(x))=u(\tfrac{1}{2})=3.708\).

| \(n\) | \(N=2\) | \(N=3\) | \(N=4\) |
| :--- | :--- | :--- | :--- |
| \(b_{0}\) | 11.90708 | 11.79043820 | 11.811570769 |
| \(b_{1}\) | -2.282088 | -2.50489438 | -2.463695553 |
| \(b_{2}\) | 0.610266 | 0.42352194 | 0.46135228 |
| \(b_{3}\) | - | -0.107929300 | -0.076759264 |
| \(b_{4}\) | - | - | 0.01820037 |
| 1/(relative error) | 424 | 2500 | 14,600 |
Table 2
Maple listing.

restart; with(LinearAlgebra): with(plots): with(orthopoly):
with(numapprox): interface(rtablesize=infinity): Digits := 16:
N := 3;   # number of unknown coefficients;
u := x*(1-x)*(b[0] + b[1]*T(2,2*x-1) + b[2]*T(4,2*x-1) + b[3]*T(6,2*x-1));
# b[0] is a function of b[1]..b[N] through the constraint;
uint := int(u, x = 0..1):  b[0] := solve(uint - Pi/sqrt(2), b[0]);
resid := evalf( simplify( diff(u,x,x) + u**3 ) );
# Galerkin: "r[m]" is the projection of the residual onto;
# the first N Legendre polynomials, weighted by x*(1-x);
w := x*(1-x):
for m from 1 to N do
  r[m] := evalf( simplify( int( w*P(2*m-2,2*x-1)*resid, x = 0..1 ) ) );
od:
bsols := solve({r[1],r[2],r[3]}, {b[1],b[2],b[3]});
nsols := numelems( [bsols] );   # number of polynomial solutions;
# Process each solution and compute rho(b[1],...,b[N]) for each;
for j from 1 to nsols do
  solseq := evalf( allvalues(bsols[j]) );
  for k from 1 to N do bsol[k] := rhs( solseq[k] ); od:
  b[0] := simplify( b[0] );   # compute "rho";
  rhoa[j] := evalf( sqrt( sum( abs(bsol[kkk]**2), kkk = 1..N ) ) );
od:
To maximize accuracy, it is desirable to delay floating point arithmetic as long as possible. (It is not possible to write an analytical solution to a polynomial system except for very low degree or for special cases, so the final step of solving the polynomial system must use floating point arithmetic.) However, exact arithmetic forces Maple to treat each component of the system as a polynomial in \((N+M)\) variables, where here \(M=2\) and the extra variables are \(\sqrt{2}\) and \(\pi\). As \(N\) increases, the rational numbers in exact arithmetic grow very rapidly to \(\mathcal{D}\) digits over \(\mathcal{D}'\) digits, where \(\mathcal{D}\) and \(\mathcal{D}'\) quickly explode to hundreds of digits, then thousands, then system crashes [insufficient memory]. In contrast, the floating point expression is an \(N\)-variate polynomial with a fixed number of digits.
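A toy illustration of this growth (our own, with arbitrarily chosen numbers): cubing an exact coefficient that mixes rationals with \(\pi\) and \(\sqrt{2}\) produces a thicket of symbols and lengthening rationals, while the evalf'd version remains a short cubic in \(b_1\) with fixed-precision coefficients.

c_exact := 3*sqrt(2)*Pi/7 + 22/7*b[1]:
expand(c_exact^3);            # Pi, sqrt(2) and growing rationals appear in every term
expand(evalf(c_exact)^3);     # an ordinary cubic in b[1] with fixed-precision float coefficients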
Table 1 shows that shifting to floating point arithmetic sooner rather than later does not prevent extremely accurate results from small \(N\). Doubling \(N_{\text{feasible}}\) reduces the error by a factor of about thirty-five.

4. Root-exclusion by radius in hyperspace

The set of Chebyshev coefficient unknowns \(\{b_{n}\}\), \(n=1,2,\ldots,N\), defines an \(N\)-dimensional Euclidean hyperspace. The radial distance from the origin is the \(L_{2}\) norm of the coefficient vector,
\begin{equation*}
\rho \equiv \sqrt{\sum_{n=1}^{N} b_{n}^{2}}. \tag{6}
\end{equation*}
A polynomial system that is cubically nonlinear in \(N\) unknowns will generically have \(3^{N}\) solutions by Bézout's Theorem: 81 roots for \(N=4\). Most polynomial roots have nothing to do with accurate
Table 3
\(L_{2}\) norms \(\rho\) of solutions in the \(N\)-dimensional space of spectral coefficients. \(\rho_{\text{true}}\) is the distance of accurate approximations of the one true TPBVP solution from the origin in coefficient space. \(\rho_{\text{min,spurious}}\) is the radial coordinate of the root nearest the origin which is a spurious solution, useless as an approximation to the Chebyshev coefficients of a solution to the differential equation. The last column is the ratio of these two.

| \(N\) | \(\rho_{\text{true}}\) | \(\rho_{\text{min,spurious}}\) | \(\rho_{\text{min,spurious}}/\rho_{\text{true}}\) |
| :---: | :--- | :--- | :---: |
| 2 | 2.36 | 28.41 | 12.04 |
| 3 | 2.54 | 16.6 | 6.5 |
| 4 | 2.54 | 10.54 | 4.1 |
Fig. 1. Each column is the ratio of the radial coordinates of solutions to the Galerkin polynomial system to that of the "true" solution, which is an accurate approximation to the spectral coefficients of the TPBVP. Each symbol represents a different solution: 9 circles for each of the 9 roots for \(N=2\), etc. Complex conjugate roots are superimposed, so the 9 circles for \(N=2\) appear as only 6.
approximations to the underlying problem. Indeed, our example has a single unique solution. But how do we identify the one true solution amid the blizzard of false roots?
This is a hard problem. Usually complex-valued roots can be junked immediately, but many false solutions are real-valued. Solutions that, when substituted into the spectral series, give a function \(u(x)\) that violates physics, such as heat flux in the wrong direction or a negative mass density, are also rapidly discarded.
For our problem, \(\rho\) is a good way to separate the bad roots from the good ones because the \(L_{2}\) norm \(\rho\) of the vector of Chebyshev coefficients for the true TPBVP solution is an order of magnitude smaller than that of all the spurious roots, as cataloged in Table 3 and Fig. 1. It is perfectly possible that a problem may have arbitrarily large amplitude branches. Indeed, the similar Bratu problem, \(u_{xx}+\lambda \exp(u)=0\) with \(u(-1)=u(1)=0\), is known to have such a branch. However, very large amplitude solutions are unlikely to be accurately represented by a small number of spectral coefficients. Large-norm solutions of a small-\(N\) polynomial system will not be useful even if the differential equation has multiple branches.
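In Maple the criterion is a short filter appended to the listing of Table 2 (rhoa[j] and nsols are defined there; the names rho_min, ratios and keep, and the threshold factor of 3, are our illustrative choices, guided by Table 3):

rho_min := min(seq(rhoa[j], j = 1..nsols)):                        # smallest norm over all roots
ratios := sort([seq(evalf(rhoa[j]/rho_min), j = 1..nsols)]);       # essentially the data behind Fig. 1
keep := [seq(`if`(rhoa[j] < 3*rho_min, j, NULL), j = 1..nsols)];   # indices of candidate physical roots

For this TPBVP only the single physical root survives the filter, since by Table 3 every spurious norm exceeds the true one by a factor of at least about four.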

5. Newton's iteration

Gheorghiu and Trif show that Newton's iteration diverges for their finite element discretization of our problem unless the integral constraint is incorporated into the iteration [3]. A priori constraints are blessings to many methods.
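For comparison, a Newton-type local solve of the constrained Galerkin system is a single call to Maple's fsolve applied to the residuals r[1], r[2], r[3] of Table 2; the search ranges below are ours, chosen to bracket the Table 1 column for \(N=3\).

newton_sol := fsolve({r[1], r[2], r[3]}, {b[1] = -3..-2, b[2] = 0..1, b[3] = -0.2..0});
# returns a single local root, which here should be the physical solution of Table 1,
# since the constraint is already built into r[m]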

6. Summary

When a low degree spectral approximation is acceptably accurate, Newton's iteration can be bypassed in favor of direct solution of the nonlinear polynomial system which is the discretization. Building an a priori constraint into the assumed solution greatly reduces error without increasing computer time.
Rejecting exact rational arithmetic in favor of floating point arithmetic can greatly increase feasible values of \(N\). It is one of the glories of computer algebra that factors of \(\pi\) and \(\sqrt{2}\) can be carried through computations as symbols, but this is expensive, and counterproductive if the final answer must be floating point numbers anyway.
Identifying spurious branches of a nonlinear discretization is an essential but very difficult task. We offer a simple new strategy, suggestive rather than infallible, which is to reject solutions whose coefficient vectors have large norms (radial coordinate \(\rho\)).

Acknowledgments

We thank the referee and the U.S. National Science Foundation (grant DMS-1521158).

References

[1] J.P. Boyd, Tracing multiple solution branches for nonlinear ordinary differential equations: Chebyshev and Fourier spectral methods and a degree-increasing spectral homotopy [DISH], J. Sci. Comput. 19 (2016) 1113–1143.
[2] J.P. Boyd, Degree-increasing [\(N\) to \(N+1\)] homotopy for Chebyshev and Fourier spectral methods, Appl. Math. Lett. 57 (2016) 77–81.
[3] C.I. Gheorghiu, D. Trif, The numerical approximation to positive solution for some reaction–diffusion problems, Pure Math. Appl.: Math. Optim. 11 (2001) 243–253.
[4] J.P. Boyd, Chebyshev and Fourier Spectral Methods, Dover, New York, 2001.
[5] C.-I. Gheorghiu, Spectral Methods for Differential Problems, Casa Cartii de Stiinta, Cluj-Napoca, Romania, 2007, 157 pp. Available at http://www.ictp.acad.ro/gheorghiu/spectral.pdf.
[6] B.A. Finlayson, The Method of Weighted Residuals and Variational Principles, second ed., SIAM, New York, 2013.
[7] J.P. Boyd, Chebyshev and Legendre spectral methods in algebraic manipulation languages, J. Symbolic Comput. 16 (1993) 377–399.
[8] J.P. Boyd, Correcting three errors in Kantorovich & Krylov's approximate methods of higher analysis: Energizing perturbation series and Chebyshev and Legendre spectral algorithms with computer algebra, Amer. Math. Monthly 123 (2016) 241–257.

    • Corresponding author.
    E-mail addresses: jpboyd@umich.edu (J.P. Boyd), ghcalin@ictp.acad.ro (C.-I. Gheorghiu).
