
About B-splines. Twenty answers to one question: What is the cubic B-spline for the knots -2,-1,0,1,2?

Ewald Quak

February 9, 2016.

October 22, 2004. SINTEF Applied Mathematics, Oslo, Norway.
This article by the late Ewald Quak is published with the consent of his wife Riina and upon the initiative of a member of the editorial board of this journal. It is followed by some comments of Carl de Boor on early Bulgarian and Romanian contributions to B-splines.

In this composition an attempt is made to answer one simple question only: What is the cubic B-spline for the knots -2,-1,0,1,2? The note will take you on a most interesting trip through various fields of Mathematics and finally convince you of how little we know.

MSC. 41-XX, 65-XX, 60-XX

Keywords. cubic B-spline, curves and surfaces, CAD-CAM, Approximation Theory, Numerical Analysis, Probability.

1 Introduction

Polynomial splines are real-valued functions which consist of polynomial pieces that are glued together at given breakpoints with prescribed smoothness. To name just a few applications of these powerful tools within applied and computational mathematics: splines are a de-facto standard in the mathematical description of curves and surfaces for industrial computer-aided design and manufacture (CAD-CAM). Spline surfaces and their offspring subdivision surfaces are at the heart of professional animation systems for the production of movies like Toy Story or Monsters, Inc. Splines are also used to solve ordinary differential equations and in the multivariate form of finite elements they form the cornerstone for the numerical treatment of partial differential equations. The approximation of scattered data, be it in geometric modelling or statistics, can be carried out with splines. Wavelet theory and multiresolution analysis owe a lot to spline functions as well.

\includegraphics[scale=0.5]{bspline.png}
Figure 1 The graph of the cubic B-spline for the knots -2,-1,0,1,2.

So both faculty and students are bound to encounter splines in very different settings, and basic spline material is to some extent incorporated in undergraduate numerical analysis text books. In this paper I would like to give some examples of how a certain B-spline basis function can be characterized. Some of these examples are well-known and at the heart of important theory or applications, other disguises are rather far-fetched, and could be considered as dragged from mathematical oblivion. I do not claim that this list is in any way complete, and I invite all readers to supply me with other approaches they have encountered.

In this text, I have deliberately chosen to focus on cubic splines and consequently not formulated any statements in the highest possible generality. In this way I hope to generate more questions rather than give complete answers, stimulating the reader’s interest to take a closer look – for example at other spline degrees or different choices of breakpoints. Some remarks in this direction are given in the final section of the paper. I tried to keep the proofs as elementary as possible, while still giving a flavor of what is happening in more general situations, avoiding – if possible – straightforward calculations valid just for the cubic case. As for the sources that inspired my twenty answers, I tried to quote early original papers, but am not in a position to claim absolute historical accuracy concerning who did what first. Instead the reader is invited to consult on his or her own more detailed spline monographs, or study books dealing with a specific application area using splines. Be aware, however, that it was of course necessary to change original notations and also proofs to generate a consistent presentation.

The twenty choices are listed in Section 2, including the references that inspired them. The first answer is discussed in Section 3 and allows to say a bit where the name spline originated. Using yet another definition we are able to address in Section 4 some basic properties in items 2, 3 and 4, including the basis property that motivated the term B-spline as in basis spline. In the subsequent Section 5, divided differences of truncated power functions allow us to come up with a closed formula expression, point out connections to Green’s functions and Peano kernels, and give a geometric interpretation in items 5-8. Fourier transforms are employed in Section 6 for items 9-11. Subdivision concepts, important in wavelet theory and geometric modeling, are then showcased in Section 7 for items 12 and 13. Section 8 is devoted to probability interpretations in items 14-17, including the historic reference from 1904, where the graph of a cubic B-spline (probably) appears for the first time, years before the term B-spline was coined. Also the last items 18-20 covered in Section 9 stem from roots in probability, but are singled out because they originated in the author’s country of residence. The final Section 10 tries to give some information how the general spline theory can be developed from the cubic approach in this elementary paper and where to find the relevant information. Enjoy!

2 Twenty choices

  • [ 44 , 27 ] The function f which minimizes the integral

    \[ \int_{-2}^{2}\big(g''(x)\big)^2\,dx, \]

    when considering all functions g, whose second derivatives are square integrable over the interval $[-2,2]$ and which interpolate the following data:

    \[ g(-2)=g(2)=g'(-2)=g'(2)=0,\qquad g(-1)=g(1)=\tfrac16,\qquad g(0)=\tfrac23? \]
  • [ 38 ] The function f defined by the pieces

    \[ f(x)=\begin{cases}\frac16(x+2)^3,&\text{if }x\in[-2,-1],\\[2pt]\frac16\big((x+2)^3-4(x+1)^3\big),&\text{if }x\in[-1,0],\\[2pt]\frac16\big((2-x)^3-4(1-x)^3\big),&\text{if }x\in[0,1],\\[2pt]\frac16(2-x)^3,&\text{if }x\in[1,2],\\[2pt]0,&\text{otherwise.}\end{cases}? \]
  • [ 13 ] The function f of smallest support such that f is a cubic polynomial on each interval $[k,k+1]$ for each $k\in\mathbb{Z}$, twice continuously differentiable everywhere, even and for which $f(0)=\tfrac23$?

  • [ 2 , 12 ] The function f built from characteristic functions as

    \[ f(x)=\frac{x+2}{3}\Big(\frac{x+2}{2}\big((x+2)\chi|_{[-2,-1)}(x)+(-x)\chi|_{[-1,0)}(x)\big)+\frac{1-x}{2}\big((x+1)\chi|_{[-1,0)}(x)+(1-x)\chi|_{[0,1)}(x)\big)\Big)+\frac{2-x}{3}\Big(\frac{x+1}{2}\big((x+1)\chi|_{[-1,0)}(x)+(1-x)\chi|_{[0,1)}(x)\big)+\frac{2-x}{2}\big(x\,\chi|_{[0,1)}(x)+(2-x)\chi|_{[1,2)}(x)\big)\Big), \]

    where for $k=-2,-1,0,1$

    \[ \chi|_{[k,k+1)}(x)=\begin{cases}1&\text{if }x\in[k,k+1),\\0&\text{if }x\notin[k,k+1)\end{cases}? \]
  • [ 38 ] The function f given by

    \[ f(x)=\frac16\Big((2-x)_+^3-4(1-x)_+^3+6(-x)_+^3-4(-1-x)_+^3+(-2-x)_+^3\Big), \]

    where the truncated power functions are defined as $x_+=\max(x,0)$ and $x_+^3=(x_+)^3$?

  • [ 5 ] The function f obtained by taking the divided difference with respect to t in the points $-2,-1,0,1,2$ of Green’s function associated with the differential operator $D^4$ and multiplying the result by 24? This means

    \[ f(x)=24\,[-2,-1,0,1,2]_t\,G_4(t,x), \]

    where for any given function h which is integrable on $[-2,2]$ and any real numbers $g_0,g_1,g_2,g_3$ the function

    \[ g(t)=\sum_{j=0}^{3}g_j\frac{(t+2)^j}{j!}+\int_{-2}^{2}G_4(t,x)h(x)\,dx \]

    solves the initial value problem

    \[ g^{(4)}(t)=h(t)\quad\text{almost everywhere in }[-2,2],\qquad g^{(\ell)}(-2)=g_\ell,\ \ell=0,1,2,3. \]
  • [ 13 ] The function f for which

    \[ \int_{-\infty}^{+\infty}f(x)h^{(4)}(x)\,dx=\Delta^4h(0), \]

    for all functions h, which are four times continuously differentiable on $[-2,2]$, and where $\Delta^4h(x)$ is the fourth order central difference defined as

    \[ \Delta^4h(x)=\Delta(\Delta(\Delta(\Delta h(x)))),\ \text{with }\Delta h(x)=h\big(x+\tfrac12\big)-h\big(x-\tfrac12\big)? \]
  • [ 13 ] The function f so that $f(x)$ represents the 3-dimensional volume of the intersection of a hyperplane in $\mathbb{R}^4$ that is orthogonal to the x-axis and goes through the point $(x,0,0,0)$ with a 4-simplex of volume 1, which is placed in $\mathbb{R}^4$ so that its five vertices project orthogonally onto the points $(-2,0,0,0)$, $(-1,0,0,0)$, $(0,0,0,0)$, $(1,0,0,0)$, $(2,0,0,0)$ on the x-axis?

  • [ 38 ] The function f whose Fourier transform is the 4th power of a dilated sinc function, i.e., using the inverse Fourier transform

    \[ f(x)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}\left(\frac{\sin(u/2)}{u/2}\right)^4e^{ixu}\,du? \]
  • [ 38 ] The function f which results when taking the convolution of the hat function

    \[ N(x)=\begin{cases}x+1,&\text{if }x\in[-1,0],\\1-x,&\text{if }x\in[0,1],\\0,&\text{otherwise,}\end{cases} \]

    with itself, i.e.,

    \[ f(x)=(N*N)(x)=\int_{-\infty}^{+\infty}N(x-y)N(y)\,dy,\qquad x\in\mathbb{R}? \]
  • [ 36 , 28 ] The function f generated in the following way?

    In $\mathbb{R}^2$ a unit square is placed with its lower edge on the x-axis and then moved along this axis. Assuming the square’s center at time x to be the point $(x,\tfrac12)$, we obtain a function A such that A(x) is the area of the region cut out of the square by the graph of the hat function N(x) (see item 10) and the x-axis. Repeating this process one more time, we obtain f(x) as the area of the region cut out of the square by the graph of the function A(x) and the x-axis.

  • [ 8 ] The continuous function f which has the values $f(0)=\tfrac23$, $f(-1)=f(1)=\tfrac16$, and 0 in all other integers and satisfies the functional equation

    \[ f(x)=\frac18\big(f(2x+2)+4f(2x+1)+6f(2x)+4f(2x-1)+f(2x-2)\big)? \]
  • [ 34 , 9 ] The continuous function f which results from the following limit process?

    We start with the polygon in $\mathbb{R}^2$ that connects the initial points $P_k^0=(k,\delta_{k,0})$. Then we generate new polygons successively by setting for $r\in\mathbb{N}\cup\{0\}$ and any $k\in\mathbb{Z}$

    \[ P_{2k}^{r+1}=\frac18\big(P_{k-1}^r+6P_k^r+P_{k+1}^r\big),\qquad P_{2k+1}^{r+1}=\frac12\big(P_k^r+P_{k+1}^r\big). \]

    The resulting polygons converge to f in the sense that for $r\in\mathbb{N}\cup\{0\}$ and any $k\in\mathbb{Z}$

    \[ \Big\|\Big(\frac{k}{2^r},f\Big(\frac{k}{2^r}\Big)\Big)-P_k^r\Big\|_\infty\le\frac13\cdot\frac{1}{2^r}. \]
  • [ 25 ] The function f describing the discrete probability distribution for the following urn model?

    Consider an urn initially containing w white balls and b black balls. One ball at a time is drawn at random from the urn and its color is inspected. Then it is returned to the urn and $w+b$ balls of the opposite color are added to the urn. A total of three such draws are carried out. With $t=w/(w+b)$ as the probability of selecting a white ball on the first trial, the probability of selecting exactly 3 white balls in the three trials is $f(t-2)$, of selecting exactly 2 white balls in the three trials is $f(t-1)$, of selecting exactly 1 white ball in the three trials is $f(t)$ and of selecting exactly 0 white balls in the three trials is $f(t+1)$.

  • [ 38 , 43 ] The function f that is the density distribution of the error committed in the sum $X_1+X_2+X_3+X_4$ of 4 independent real random variables $X_k$, if each variable is replaced by its nearest integer value?

  • [ 43 ] The function f, such that f(x) is the 3-dimensional volume of the section of the 4-dimensional unit cube $[-\tfrac12,\tfrac12]^4$ cut out by the hyperplane $x_1+x_2+x_3+x_4=x$, which lies normal to the cube’s main diagonal (connecting $(-\tfrac12,-\tfrac12,-\tfrac12,-\tfrac12)$ to $(\tfrac12,\tfrac12,\tfrac12,\tfrac12)$)?

  • [ 43 ] The function f whose graph appears in the contribution of A. Sommerfeld to the Festschrift Ludwig Boltzmann gewidmet zum 60. Geburtstage, the monograph dedicated to the 60th birthday of Ludwig Boltzmann, February 20th 1904?

  • [ 6 ] The function f so that

    \[ f(x)=\frac{1}{\pi}\int_{x-\frac12}^{x+\frac12}\int_{t_3-\frac12}^{t_3+\frac12}\int_{t_2-\frac12}^{t_2+\frac12}\Big(Q\big(t_1+\tfrac12\big)-Q\big(t_1-\tfrac12\big)\Big)\,dt_1\,dt_2\,dt_3, \]

    where

    \[ Q(t)=\lim_{\lambda\to0^+}\arctan\Big(\frac{t}{\lambda}\Big)? \]
  • [ 23 ] The function f given by the real part of a complex integral in the following way:

    \[ f(x)=\frac{8}{\pi}\,\operatorname{Re}\int_0^{+\infty}\frac{u^3}{\big(4^2+(u^2-ix)^2\big)\big(2^2+(u^2-ix)^2\big)\big(u^2-ix\big)}\,du? \]
  • [ 23 ] The function f represented as

    \[ f(x)=\frac{1}{12}\sum_{k=0}^{4}(-1)^k\binom{4}{k}\,|x+2-k|^3\,\big(\operatorname{sign}(x+2-k)\big)^4? \]

3 Prelude: why is a spline called a spline?

One of the most important and straightforward things one wishes to do in numerical analysis is to interpolate given values using some simple basic functions, for example algebraic or trigonometric polynomials. Concerning interpolation by algebraic polynomials, however, it has been known for over a hundred years [ 35 ] that an interpolant can display enormous oscillations, even if the data is taken from a smooth well-behaved function. One of the ideas used to counter this effect is to consider functions that consist of various simple pieces which are glued together with a given smoothness.

3.1 The complete cubic spline interpolant

Here we will try cubic polynomial pieces connected in such a way that they form a function that is twice continuously differentiable everywhere. The places where different pieces meet are typically called breakpoints or knots. In this sense let us consider the following example.

(Interpolation) Given the five abscissae $t_0<t_1<t_2<t_3<t_4$, we wish to find an interpolating cubic spline function s, meaning that

\[ s\in C^2[t_0,t_4],\qquad s|_{[t_k,t_{k+1}]}\in\Pi_3\ \ \text{for}\ \ k=0,1,2,3,\qquad s(t_k)=g_k\ \ \text{for}\ \ k=0,1,2,3,4 \]

for prescribed real values $g_k$.

Is this problem even well-posed in the sense that there exists a unique solution amongst all the piecewise cubic $C^2$ functions on $[t_0,t_4]$ with interior breakpoints in $t_1<t_2<t_3$? The function s must be a cubic polynomial p on the interval $[t_0,t_1]$, giving us 4 parameters. Since we have $C^2$ continuity in the point $t_1$, we must have $s(x)=p(x)+a_1(x-t_1)^3$ on the interval $[t_1,t_2]$, and analogously $s(x)=p(x)+a_1(x-t_1)^3+a_2(x-t_2)^3$ on $[t_2,t_3]$ and $s(x)=p(x)+a_1(x-t_1)^3+a_2(x-t_2)^3+a_3(x-t_3)^3$ on $[t_3,t_4]$. This means we have in total seven parameters and five interpolation conditions to satisfy, so if – as we just expect, but do not know precisely right now – the conditions are independent, we still have two parameters to fix by other conditions. There are many choices for the additional two conditions, typically involving the two endpoints of the interval. One such choice is $s''(t_0)=s''(t_4)=0$ [ 27 ] , but that actually influences the approximation quality in the endpoints negatively ( [ 3 ] , p. 55). Another prominent choice, which we consider here, is to prescribe Hermite data in the endpoints [ 44 ] , i.e., demanding in addition

\[ s'(t_0)=g_0'\quad\text{and}\quad s'(t_4)=g_4', \]
3.2

yielding what is called a complete cubic spline interpolant s.

To show properly the existence and uniqueness of the complete spline interpolant, we can proceed as in the early days of splines and set directly

\[ s|_{[t_k,t_{k+1}]}(x)=A_k+B_k(x-t_k)+C_k(x-t_k)^2+D_k(x-t_k)^3,\qquad k=0,1,2,3. \]

We get from the interpolation conditions (3.1) and the continuity in the break-points, with hk=tk+1tk, for k=0,1,2,3

\[ A_k=g_k,\qquad h_kB_k+h_k^2C_k+h_k^3D_k=g_{k+1}-g_k. \]

The continuity of the first derivatives then yields

\[ B_k+2h_kC_k+3h_k^2D_k=B_{k+1}\quad\text{for }k=0,1,2, \]
3.5

and the continuity of the second derivative

\[ 2C_k+6h_kD_k=2C_{k+1}\quad\text{for }k=0,1,2. \]
3.6

Using as principal unknowns the $C_k=\tfrac12 s''(t_k)$ for $k=0,1,2,3$, as well as $C_4$ as an additional variable, we get from (3.6), extended also to $k=3$,

\[ D_k=\frac{C_{k+1}-C_k}{3h_k},\quad\text{for }k=0,1,2,3. \]
3.7

Then (3.4) yields

\[ B_k=\frac{g_{k+1}-g_k}{h_k}-\frac13h_k(2C_k+C_{k+1}),\quad\text{for }k=0,1,2,3. \]
3.8

Finally, the equations (3.8) and (3.7) substituted into (3.5) yield the linear system

\[ \begin{pmatrix}h_0&2(h_0+h_1)&h_1&0&0\\0&h_1&2(h_1+h_2)&h_2&0\\0&0&h_2&2(h_2+h_3)&h_3\end{pmatrix}\begin{pmatrix}C_0\\C_1\\C_2\\C_3\\C_4\end{pmatrix}=3g_I \]

with the right hand side

\[ g_I=\Big(\frac{g_2-g_1}{h_1}-\frac{g_1-g_0}{h_0},\ \frac{g_3-g_2}{h_2}-\frac{g_2-g_1}{h_1},\ \frac{g_4-g_3}{h_3}-\frac{g_3-g_2}{h_2}\Big)^T. \]

The two additional Hermite conditions (3.2) in the endpoints mean that

\[ B_0=g_0'\quad\text{and}\quad B_3+2h_3C_3+3h_3^2D_3=g_4', \]

resulting in the final tridiagonal 5×5 system

\[ \begin{pmatrix}2h_0&h_0&0&0&0\\h_0&2(h_0+h_1)&h_1&0&0\\0&h_1&2(h_1+h_2)&h_2&0\\0&0&h_2&2(h_2+h_3)&h_3\\0&0&0&h_3&2h_3\end{pmatrix}\begin{pmatrix}C_0\\C_1\\C_2\\C_3\\C_4\end{pmatrix}=3g \]
3.9

with

\[ g=\Big(\frac{g_1-g_0}{h_0}-g_0',\ g_I,\ g_4'-\frac{g_4-g_3}{h_3}\Big)^T. \]

The coefficient matrix is diagonally dominant, and so there exists a unique solution for the $C_k$, from which all the other parameters can be computed, yielding a unique complete cubic spline interpolant s for the prescribed interpolation conditions (3.1) and (3.2). Clearly, the whole deduction can be carried out for any number n of distinct interior breakpoints and Hermite endpoint conditions.
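
As a small illustration (not part of the original text), the following Python sketch assembles the tridiagonal system (3.9) for arbitrary distinct breakpoints and Hermite endpoint data and solves it for the $C_k$; function and variable names are my own choices.

```python
import numpy as np

def complete_cubic_spline_second_derivs(t, g, dg_left, dg_right):
    """Solve the tridiagonal system (3.9) for C_k = s''(t_k)/2.

    t        : strictly increasing breakpoints t_0,...,t_n
    g        : interpolation values g_0,...,g_n
    dg_left  : prescribed derivative s'(t_0)
    dg_right : prescribed derivative s'(t_n)
    """
    t, g = np.asarray(t, float), np.asarray(g, float)
    n = len(t) - 1
    h = np.diff(t)                       # h_k = t_{k+1} - t_k
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    # Hermite boundary rows
    A[0, 0], A[0, 1] = 2 * h[0], h[0]
    rhs[0] = 3 * ((g[1] - g[0]) / h[0] - dg_left)
    A[n, n - 1], A[n, n] = h[n - 1], 2 * h[n - 1]
    rhs[n] = 3 * (dg_right - (g[n] - g[n - 1]) / h[n - 1])
    # interior rows coming from the C^1 continuity conditions
    for k in range(1, n):
        A[k, k - 1:k + 2] = h[k - 1], 2 * (h[k - 1] + h[k]), h[k]
        rhs[k] = 3 * ((g[k + 1] - g[k]) / h[k] - (g[k] - g[k - 1]) / h[k - 1])
    return np.linalg.solve(A, rhs)

# Data of item 2.1: knots -2,...,2, values 0, 1/6, 2/3, 1/6, 0, zero end slopes.
C = complete_cubic_spline_second_derivs([-2, -1, 0, 1, 2],
                                         [0, 1/6, 2/3, 1/6, 0], 0.0, 0.0)
print(C)   # approximately [0, 0.5, -1, 0.5, 0], as found in Section 3.4
```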

3.2 An extremal property

The complete cubic spline interpolant s has a very special extremal property. If we take any other function g that possesses a square-integrable second derivative on $[t_0,t_4]$ and interpolates the given function values $g_k$ in the abscissae $t_k$, and in addition the derivative values $g_0'$ and $g_4'$ in the endpoints, we have the following integral relation between s and g

\[ \int_{t_0}^{t_4}\big(g''(x)-s''(x)\big)s''(x)\,dx=0, \]
3.10

because integration by parts yields

\[ \int_{t_0}^{t_4}\big(g''(x)-s''(x)\big)s''(x)\,dx=-\int_{t_0}^{t_4}\big(g'(x)-s'(x)\big)s'''(x)\,dx, \]

as the remaining terms vanish due to the derivative interpolation (3.2) in the interval endpoints. Since $s'''$ is piecewise constant, namely $6D_k$ on the interval $[t_k,t_{k+1}]$, we can split the integral to obtain

\[ \int_{t_0}^{t_4}\big(g'(x)-s'(x)\big)s'''(x)\,dx=\sum_{k=0}^{3}6D_k\int_{t_k}^{t_{k+1}}\big(g'(x)-s'(x)\big)\,dx=\sum_{k=0}^{3}6D_k\Big(\big(g(t_{k+1})-s(t_{k+1})\big)-\big(g(t_k)-s(t_k)\big)\Big)=0 \]

due to the interpolation conditions (3.1), establishing (3.10). This relation, however, implies for any such interpolatory g a Pythagorean integral equation

\[ \int_{t_0}^{t_4}\big(g''(x)\big)^2\,dx=\int_{t_0}^{t_4}\big(g''(x)-s''(x)\big)^2\,dx+\int_{t_0}^{t_4}\big(s''(x)\big)^2\,dx, \]

so that

\[ \int_{t_0}^{t_4}\big(s''(x)\big)^2\,dx\le\int_{t_0}^{t_4}\big(g''(x)\big)^2\,dx \]
3.11

with equality only if g=s.

So s is the unique function among all the interpolants for the specific interpolation conditions (3.1) and (3.2) which possesses minimal 2-norm of the second derivative. Again, this holds for all sets of distinct breakpoints in an interval [a,b].

Note that the integral relation (3.10) and thus the extremal property (3.11) also hold in the case where the Hermite boundary interpolation conditions (3.2) are replaced by the conditions $s''(t_0)=s''(t_4)=0$ [ 27 ] , and with suitable boundary conditions similar properties can be derived for other odd spline degrees [ 1 ] .

3.3 This is why a spline is called a spline

Finally, we can address the title question of this section. From differential geometry we know that the curvature of a curve, which is given as the graph (x,g(x)) of a smooth function g over an interval [a,b], can be described locally by the formula

\[ \kappa(x)=\frac{g''(x)}{\big(1+(g'(x))^2\big)^{3/2}}. \]

Instead of minimizing the strain energy $\int\kappa^2(x)\,dx$ of such a curve that is to pass through given points, one can simplify by just minimizing $\int(g''(x))^2\,dx$. From the previous subsection we know that the optimal function in this latter case is in fact the cubic spline interpolant. Keep in mind, though, that the curvature $\kappa$ is actually close to $g''$ only if $|g'|$ is really small.

Certain instruments used in naval architecture for the construction of a smooth curve through given points were called splines, and it is for this reason the name spline was introduced in the pioneering work of I. J. Schoenberg [ 38 ] , p. 48: For k=4 (polynomial order = degree +1) they (the curves) represent approximately the curves drawn by means of a spline (instrument) and for this reason we propose to call them spline curves of order k.

Note that the use of polynomial order instead of degree has some advantages which we will come back to in later sections. Figure 2 shows the reconstruction of a mechanical spline instrument, produced by the Norwegian Maritime Museum for the University of Oslo.

\includegraphics{figpage11.jpg}
Figure 2 A mechanical spline.

3.4 A special case (Item 2.1)

Using the function values and first derivatives given in item 2.1, we obtain from the general case (3.9) the specific tridiagonal system of conditions

\[ \begin{pmatrix}2&1&0&0&0\\1&4&1&0&0\\0&1&4&1&0\\0&0&1&4&1\\0&0&0&1&2\end{pmatrix}\begin{pmatrix}C_0\\C_1\\C_2\\C_3\\C_4\end{pmatrix}=\begin{pmatrix}\tfrac12\\1\\-3\\1\\\tfrac12\end{pmatrix}, \]

yielding the solution $C_0=C_4=0$, $C_1=C_3=\tfrac12$ and $C_2=-1$.

As the piecewise representation of the complete cubic spline interpolant s in this (very) particular case we thus obtain

\[ s(x)=\begin{cases}\frac16(x+2)^3&\text{if }x\in[-2,-1],\\[2pt]\frac16+\frac12(x+1)+\frac12(x+1)^2-\frac12(x+1)^3&\text{if }x\in[-1,0],\\[2pt]\frac23-x^2+\frac12x^3&\text{if }x\in[0,1],\\[2pt]\frac16-\frac12(x-1)+\frac12(x-1)^2-\frac16(x-1)^3&\text{if }x\in[1,2].\end{cases} \]
3.12

Note that we actually get here $s\in C^2(\mathbb{R})$, if we take s to be identically zero outside of the interval $[-2,2]$. For a graph of this function take a look back at the front page.

4 One possible definition and some properties

Somehow the introduction in the previous section is not exactly to the point. Although it shows clearly why cubic splines as a class of functions are distinguished for certain interpolation problems through the extremal property (3.11), it does not really single out one specific function amongst all splines. Why should the function values and endpoint derivatives in item 2.1 lead to any kind of special spline function? Shouldn’t we rather think of a spline that is equal to one in one breakpoint and zero in all others? We see that if we add more breakpoints and corresponding interpolation conditions, we increase the size of the system of equations (3.9), though it will still be tridiagonal and uniquely solvable. As a consequence, however, the complete cubic spline interpolant will generally consist of many pieces and be globally supported from endpoint to endpoint. Thus in total item 2.1 does not seem like such a good way to define spline basis functions. In fact, what happened is the following: I wanted to introduce the general extremal property of splines and thus explain the name in the beginning, but also had to get exactly the answer (3.16). So in a way I still owe you a proper definition of a cubic B-spline for the knots $-2,-1,0,1,2$.

4.1 A definition of a cubic B-spline

At the end of the previous section, we have already seen that the function s in (3.16) is actually everywhere twice differentiable if we extend it by setting it to zero outside of the interval $[-2,2]$. That way s is in fact a cubic polynomial over each interval $[k,k+1]$, $k\in\mathbb{Z}$, not just for $k=-2,-1,0,1$. The function has just four nontrivial pieces, and for numerical purposes it would be very advantageous to operate only with small supports and a small number of pieces. So far, however, all these properties are satisfied also for a multiple $c\,s$, for any nonzero constant c. A prescribed function value in the center of the support, $x=0$, is one way to fix this constant. Why exactly the normalization in the following definition was picked will become clear a little later. Now let’s go.

Definition 4.1

(Cubic B-spline) We introduce the cubic B-spline B for the knots (or breakpoints) $-2,-1,0,1,2$ by demanding that it satisfies the following properties:

  • B is a real-valued function defined for all $x\in\mathbb{R}$.

  • B has only local support, namely the interval $[-2,2]$.

  • On each subinterval $[k,k+1]$ for $k\in\mathbb{Z}$ the function B is a cubic polynomial: $B|_{[k,k+1]}\in\Pi_3$.

  • The six polynomial pieces, i.e., including the two identically zero ones on $(-\infty,-2]$ and $[2,\infty)$, are fitted together so that the function B is twice continuously differentiable everywhere: $B\in C^2(\mathbb{R})$.

  • Finally we impose that the function B attains a specific value in the center of its support, namely $B(0)=\tfrac23$.

Ignoring the results of the previous section, we would just naively register that we have a total of sixteen free parameters, since we have the four cubic polynomial pieces over $[-2,2]$, each providing four parameters. On the other hand, we have fifteen conditions in the form of three continuity conditions for each of the five active breakpoints $-2,-1,0,1,2$, which heuristically explains why we need a final condition to obtain a unique solution.

4.2 An explicit formula (Item 2.2)

Starting from scratch, we determine the polynomial pieces of B explicitly this time. The $C^2$ continuity in $-2$ and $2$ imposes, since B is identically zero outside $[-2,2]$, that

\[ B|_{[-2,-1]}(x)=a_{-2}(x+2)^3,\qquad B|_{[1,2]}(x)=a_1(2-x)^3 \]

for some factors a2 and a1, while the C2 continuity in 1 and 1 leads to

\[ B|_{[-1,0]}(x)=a_{-2}(x+2)^3+a_{-1}(x+1)^3,\qquad B|_{[0,1]}(x)=a_1(2-x)^3+a_0(1-x)^3 \]

for some $a_{-1}$ and $a_0$. The $C^2$ continuity in 0 yields – not too surprisingly – the symmetry that $a_{-2}=a_1$ and $a_{-1}=a_0$ as well as $a_0=-4a_1$.

Thus the cubic B-spline B is indeed given up to a constant factor by the continuity conditions imposed in the knots, and the prescribed function value in 0 fixes this constant to achieve the representation of item 2.2, with the polynomial pieces of course just the same as those already stated in (3.16). Note that

\[ B(x)>0\quad\text{for }x\in(-2,2). \]

Item 2.2 can be found in formula (4.2) on page 71 of [ 38 ] , but there it is derived in a completely different way, which we will come back to later in Section 6.
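
As a quick check of the explicit representation in item 2.2, here is a short Python sketch (my own illustration, not part of the original paper) that evaluates the four polynomial pieces and verifies the values $B(0)=\tfrac23$ and $B(\pm1)=\tfrac16$ used above.

```python
def cubic_bspline(x):
    """Cubic B-spline for the knots -2,-1,0,1,2, following item 2.2."""
    if -2.0 <= x <= -1.0:
        return (x + 2) ** 3 / 6
    if -1.0 <= x <= 0.0:
        return ((x + 2) ** 3 - 4 * (x + 1) ** 3) / 6
    if 0.0 <= x <= 1.0:
        return ((2 - x) ** 3 - 4 * (1 - x) ** 3) / 6
    if 1.0 <= x <= 2.0:
        return (2 - x) ** 3 / 6
    return 0.0

assert abs(cubic_bspline(0.0) - 2 / 3) < 1e-14
assert abs(cubic_bspline(1.0) - 1 / 6) < 1e-14
assert abs(cubic_bspline(-1.0) - 1 / 6) < 1e-14
```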

4.3 The basis property

Next we see that for $k\in\mathbb{Z}$ the translated functions $B(\cdot-k)$ are also in $C^2(\mathbb{R})$, have support $[k-2,k+2]$ and each consist of four cubic polynomial pieces within their supports. Thus it is possible to look at the whole set of functions $\{B(\cdot-k)\}_{k\in\mathbb{Z}}$ instead of only B. Then any function represented as

\[ \sum_{k\in\mathbb{Z}}c_kB(\cdot-k) \]
4.1

is a real-valued piecewise cubic $C^2$ function on all of $\mathbb{R}$ with breakpoints in the integers $\mathbb{Z}$. Note that due to the local supports, the sum for any $x\in\mathbb{R}$ contains at most four terms for which the corresponding function value of a B-spline translate is nonzero, so we need not worry about the convergence of an infinite series here.

Do we cover all real-valued piecewise cubic C2 functions with breakpoints in the integers this way, meaning is it possible to write any such function g in the form given in (4.1)? The answer is yes, justifying the term B-spline – as for basis spline – introduced by Schoenberg in [ 39 ] , p. 256: It was shown in ( [ 13 ] ) that spline functions in an interval (a,b), finite or infinite, with prescribed knots, admit a unique representation in terms of so-called fundamental spline functions, which we now call B-splines. The master was of course referring to arbitrary knots and polynomial degrees, but we are content to investigate the specific situation at hand.

Let us start by looking at the interval $[0,1]$. Only the four translates $B(\cdot+1)$, $B(\cdot)$, $B(\cdot-1)$ and $B(\cdot-2)$ do not vanish on this interval. Denoting $p_k=B(\cdot-k)|_{[0,1]}$ for $k=-1,0,1,2$, we have for $x\in[0,1]$ from item 2.2

\[ p_{-1}(x)=\tfrac16(1-x)^3,\qquad p_0(x)=\tfrac16\big((2-x)^3-4(1-x)^3\big),\qquad p_1(x)=\tfrac16\big((x+1)^3-4x^3\big),\qquad p_2(x)=\tfrac16x^3. \]

An instructive way to check that the four polynomials pk are linearly independent is to show that the monomial basis up to degree 3 can be represented by the pk locally, i.e., for x[0,1]:

\[ 1=p_{-1}(x)+p_0(x)+p_1(x)+p_2(x),\qquad x=-p_{-1}(x)+p_1(x)+2p_2(x),\qquad x^2=\tfrac23p_{-1}(x)-\tfrac13p_0(x)+\tfrac23p_1(x)+\tfrac{11}{3}p_2(x),\qquad x^3=6p_2(x). \]

This means that we can write the restriction to $[0,1]$ of any cubic polynomial, and in particular of our piecewise cubic $C^2$ function g, in terms of $B(\cdot-k)$, $k=-1,0,1,2$:

\[ g|_{[0,1]}=\sum_{k=-1}^{2}c_kB(\cdot-k)|_{[0,1]}. \]
4.2

The $C^2$ continuity in $x=1$ means that

\[ g|_{[1,2]}-\sum_{k=-1}^{2}c_kB(\cdot-k)|_{[1,2]}=\tfrac16c_3(x-1)^3,\quad\text{for some }c_3\in\mathbb{R}. \]

As the support of $B(\cdot-3)$ is the interval $[1,5]$ and $B(x-3)|_{[1,2]}=\tfrac16(x-1)^3$, we get

\[ g|_{[0,2]}=\sum_{k=-1}^{3}c_kB(\cdot-k)|_{[0,2]} \]

and we can proceed inductively interval by interval for the positive x-axis and repeat the procedure for the negative x-axis, looking first at $[-1,0]$ and so forth, thus establishing that any g has a representation of the form (4.1).

4.4 Local reproduction of polynomials

As a special case, since all cubic polynomials can of course be seen as piecewise polynomials, where the breakpoints are just not active, we know that we can represent them as a spline series of the form (4.1). In fact, the extension of the local representation of the monomials can be derived inductively for all $x\in\mathbb{R}$ as

\[ 1=\sum_{k\in\mathbb{Z}}B(x-k),\qquad x=\sum_{k\in\mathbb{Z}}kB(x-k),\qquad x^2=\sum_{k\in\mathbb{Z}}\Big(k^2-\tfrac13\Big)B(x-k),\qquad x^3=\sum_{k\in\mathbb{Z}}(k^3-k)B(x-k) \]

Note that the normalization condition $B(0)=\tfrac23$ from the B-spline definition is just the right one to produce that the spline B and its integer translates form a partition of unity (4.3).

Recalling that $B(\cdot-k)$ has support $[k-2,k+2]$, the knots in the interior of its support are $k-1,k,k+1$. Why don’t you find out how the coefficients in the expansions of the monomials are related to the symmetric functions? These are given for any three real numbers $s_1,s_2,s_3$ by

\[ \mathrm{symm}_j(s_1,s_2,s_3)=\sum_{1\le i_1<i_2<\cdots<i_j\le3}s_{i_1}s_{i_2}\cdots s_{i_j},\quad\text{for }j=0,1,2,3. \]

4.5 Minimal support (Item 2.3)

Since we have now established that each $C^2$ function which is piecewise cubic over the integers can be written in the series form (4.1), we can show that the function B has indeed smallest support, meaning that there is no such function $g\not\equiv0$, whose support is properly contained in $[-2,2]$. If we assume that there is such a function, since it is piecewise polynomial over the integers, its support would be contained either in $[-2,1]$ or $[-1,2]$. We look at the first case, the other one is dealt with analogously.

According to the basis property, we know that the function g must have a representation of the form (4.1). Since the function g is identically zero on $[1,2]$, arguing in a similar way as for (4.2), we establish that in the series representation $c_0=c_1=c_2=c_3=0$, and use induction to show that $c_k=0$ also for $k\ge4$. Proceeding similarly for the interval $[-3,-2]$, we get $c_{-4}=c_{-3}=c_{-2}=c_{-1}=0$, and from there on $c_k=0$ also for $k\le-5$, establishing that $g\equiv0$. The same argument holds of course to see that the piecewise cubic $C^2$ functions of support $[k-2,k+2]$ must have the form $c_kB(\cdot-k)$, $c_k\ne0$, for any $k\in\mathbb{Z}$.

We are now able to address item 2.3. A function, which is in $C^2(\mathbb{R})$, a cubic polynomial on each interval $[k,k+1]$ for $k\in\mathbb{Z}$, and has smallest support must have the form $c_kB(\cdot-k)$. Since it attains the function value $\tfrac23$ in $x=0$, it can only be $4B(\cdot+1)$, $B(\cdot)$ or $4B(\cdot-1)$. Since the function is supposed to be even as well, it must be B itself.

As mentioned before, our argument for the basis property of spline functions is derived from [ 13 ] , where it is actually carried out for any degree and any kind of knot sequence, i.e., the breakpoints are no longer equally spaced and thus the basis functions are no longer translates of just one function like our B here. For those who are striving (not always successfully) to publish their results promptly, here is a quote from the introduction of this paper of 1966: The present paper was written in 1945 and completed by 1947 (...) but for no good reason has so far not been published. Those were the days.

4.6 A recursion formula (Item 2.4)

Although I have stated in the introduction that we will restrict ourselves to polynomial degree 3, it is actually essential to see how minimally supported splines of given degree can be recursively put together using minimally supported splines of lower degree. So I am going to introduce a specific piecewise quadratic, linear and constant function now, but without much of an explanation. Find out for yourself about their properties such as the relation of the polynomial degree to the number of nontrivial pieces, positioning of breakpoints and the order of smoothness in these breakpoints. Once you have reached the last section and thus the general definition of a B-spline, you will be able to put things into perspective. Just to confuse you I will index these piecewise polynomial functions by the order of the polynomial pieces, and not by their degree.

We start by setting M4=B as a piecewise cubic function. For piecewise quadratics we define M3 by

\[ M_3(x)=\begin{cases}\frac12\big(x+\frac32\big)^2,&\text{if }x\in[-\frac32,-\frac12],\\[2pt]-x^2+\frac34,&\text{if }x\in[-\frac12,\frac12],\\[2pt]\frac12\big(x-\frac32\big)^2,&\text{if }x\in[\frac12,\frac32],\\[2pt]0,&\text{otherwise}\end{cases} \]
4.4

and the piecewise linear M2 (yes, this is actually the hat function N from item 2.10) as

\[ M_2(x)=\begin{cases}1+x,&\text{if }x\in[-1,0],\\1-x,&\text{if }x\in[0,1],\\0,&\text{otherwise.}\end{cases} \]
4.9

Finally, for piecewise constants we take

\[ M_1=\chi|_{[-\frac12,\frac12)}=\begin{cases}1,&\text{if }x\in[-\frac12,\frac12),\\0,&\text{if }x\notin[-\frac12,\frac12).\end{cases} \]
4.13

Now the functions $M_1(\cdot-\tfrac12)=\chi|_{[0,1)}$, $M_2$, $M_3(\cdot-\tfrac12)$ and $M_4$ are all piecewise polynomials with breakpoints in the integers of increasing polynomial degree, but are they connected in some way? We set $t_k=k$, $k\in\mathbb{Z}$, and define, following the notation in [ 13 ] ,

\[ M_1(x\,|\,t_k,t_{k+1})=M_1\big(x-k-\tfrac12\big)=\chi|_{[t_k,t_{k+1})}(x),\qquad M_2(x\,|\,t_k,t_{k+1},t_{k+2})=M_2(x-k-1),\qquad M_3(x\,|\,t_k,t_{k+1},t_{k+2},t_{k+3})=M_3\big(x-k-\tfrac32\big),\qquad M_4(x\,|\,t_k,t_{k+1},t_{k+2},t_{k+3},t_{k+4})=M_4(x-k-2). \]

It is easily verified using item 2.1, (4.8), (4.12) and (4.15), that the following recursion holds for j=2,3,4 and any kZ:

\[ M_j(x\,|\,t_k,\ldots,t_{k+j})=\frac{x-t_k}{t_{k+j-1}-t_k}\,M_{j-1}(x\,|\,t_k,\ldots,t_{k+j-1})+\frac{t_{k+j}-x}{t_{k+j}-t_{k+1}}\,M_{j-1}(x\,|\,t_{k+1},\ldots,t_{k+j}). \]

This recursion formula can be illustrated in the form of a triangle

[Triangular scheme: the four characteristic functions $\chi_{[t_k,t_{k+1})},\ldots,\chi_{[t_{k+3},t_{k+4})}$ form the bottom row; combining neighbours with the weights $\frac{x-t_k}{t_{k+j-1}-t_k}$ and $\frac{t_{k+j}-x}{t_{k+j}-t_{k+1}}$ produces the three functions $M_2$, then the two functions $M_3$, and finally $M_4(x\,|\,t_k,\ldots,t_{k+4})$ at the top.]

For $k=-2$, substituting expressions for decreasing j and comparing with item 2.2 interval by interval establishes item 2.4. Of course, just checking the formulae for the polynomial pieces is unsatisfactory, and will not work for arbitrary degree, let alone arbitrary breakpoints, so we are in need of a closed formula expression for the B-splines and we will attack that problem in the subsequent section. For the recursion in the general case, we defer to the final section.
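
The recursion translates directly into a little program. The following Python sketch (my own illustration, with my own names) implements $M_j(x\,|\,t_k,\ldots,t_{k+j})$ for the integer knots $t_i=i$, starting from the characteristic functions, and checks that $M_4(x\,|\,-2,-1,0,1,2)$ reproduces the piecewise formula of item 2.2 at a few points.

```python
def M(j, k, x):
    """M_j(x | t_k, ..., t_{k+j}) for the integer knots t_i = i,
    built with the recursion of Section 4.6."""
    if j == 1:
        return 1.0 if k <= x < k + 1 else 0.0   # characteristic function of [k, k+1)
    left = (x - k) / (j - 1) * M(j - 1, k, x)
    right = (k + j - x) / (j - 1) * M(j - 1, k + 1, x)
    return left + right

def B_piecewise(x):
    # explicit pieces of item 2.2, for comparison
    if -2 <= x <= -1: return (x + 2) ** 3 / 6
    if -1 <= x <= 0:  return ((x + 2) ** 3 - 4 * (x + 1) ** 3) / 6
    if 0 <= x <= 1:   return ((2 - x) ** 3 - 4 * (1 - x) ** 3) / 6
    if 1 <= x <= 2:   return (2 - x) ** 3 / 6
    return 0.0

for x in (-1.75, -0.5, 0.0, 0.3, 1.9):
    assert abs(M(4, -2, x) - B_piecewise(x)) < 1e-12
```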

Having a recursion available is very important, for instance, to handle B-splines in practical computations in a numerically stable way. The recursion, as found independently in [ 2 ] and [ 12 ] , is typically considered as the starting point for splines as a practical numerical tool. We will not dwell on this too much here, but give one example, namely the evaluation of the series (4.1).

For $x\in[t_\ell,t_{\ell+1})$, for any $\ell\in\mathbb{Z}$, only four terms are relevant due to the compact support of B. Writing the four relevant translates as $M_4(x\,|\,t_k,\ldots,t_{k+4})=B(x-k-2)$, with the coefficients renamed accordingly, we have

\[ \sum_{k\in\mathbb{Z}}c_k\,M_4(x\,|\,t_k,\ldots,t_{k+4})=\sum_{k=\ell-3}^{\ell}c_k\,M_4(x\,|\,t_k,\ldots,t_{k+4})=c_\ell^3(x), \]

where the $c_\ell^3(x)$ are defined recursively for $j=1,2,3$, and $k=\ell-3+j,\ldots,\ell$, as

\[ c_k^j(x)=\frac{t_{k+4-j}-x}{t_{k+4-j}-t_k}\,c_{k-1}^{j-1}(x)+\frac{x-t_k}{t_{k+4-j}-t_k}\,c_k^{j-1}(x), \]

with the initialization

\[ c_k^0(x)=c_k,\qquad\text{for }k=\ell-3,\ldots,\ell. \]

Again, we can illustrate this using a triangular scheme:

[Triangular scheme for the evaluation: the coefficients $c_{\ell-3},\ldots,c_\ell$ form the bottom row; each higher row is obtained by the convex combinations of the recursion above, ending with $c_\ell^3(x)$ at the top.]

For a thorough look at triangular schemes for polynomials and splines consult [ 26 ] .

5 Enter: truncated power functions

To derive a closed formula, especially one allowing also the use of nonuniform knots, we start with a different approach. The most straightforward piecewise polynomial functions are the following:

Definition 5.1

For all $x\in\mathbb{R}$ the truncated function is defined as

\[ x_+=\max(x,0) \]

and for fixed $n\in\mathbb{N}$ the truncated power function of degree n by

\[ x_+^n=(x_+)^n. \]

These functions consist of just two polynomial pieces glued together at the breakpoint 0, so that the function values of the two pieces match in the knot and also all the corresponding one-sided derivatives (the left-sided ones for the left piece, which is identically zero, and the right-sided ones for the right piece, the polynomial $x^n$) up to the order $n-1$. The n-th derivative then displays a jump discontinuity in the breakpoint. Altogether $x_+^n$ is a function that is $n-1$ times continuously differentiable on all of $\mathbb{R}$. Clearly the breakpoint can be shifted to any other point by a simple translation, i.e., $(x-a)_+^n$ has its breakpoint in a. A quick quote from [ 3 ] , p. 101: We refuse to worry about the meaning of the symbol $0_+^0$.

5.1 A closed formula for B-splines (Item 2.5)

Although the truncated power functions have infinite support, a suitable linear combination of truncated power functions with different breakpoints can generate a compactly supported function. In fact, using the definition and investigating the different cases for the respective intervals, we can show that the representation in item 2.5 holds, i.e.,

\[ B(x)=\frac16\Big((2-x)_+^3-4(1-x)_+^3+6(-x)_+^3-4(-1-x)_+^3+(-2-x)_+^3\Big). \]

At this point it is time to remember a fundamental tool in numerical analysis, typically introduced when studying the Newton form of interpolation polynomials, namely the divided differences.

For a function f the fourth order divided difference in the five distinct points $t_0,t_1,t_2,t_3,t_4$ is defined as the leading coefficient of the polynomial of degree 4 that interpolates f in $t_0,t_1,t_2,t_3,t_4$, so

\[ [t_0,t_1,t_2,t_3,t_4]f=\sum_{k=0}^{4}\frac{f(t_k)}{\omega'(t_k)}, \]
5.1

where

\[ \omega(t)=(t-t_0)(t-t_1)(t-t_2)(t-t_3)(t-t_4). \]

As is well known, we can compute divided differences for points $t_k<\cdots<t_{k+\ell}$ by a recursion:

\[ [t_k]f=f(t_k),\quad\text{for }\ell=0 \]

and

\[ [t_k,\ldots,t_{k+\ell}]f=\frac{[t_{k+1},\ldots,t_{k+\ell}]f-[t_k,\ldots,t_{k+\ell-1}]f}{t_{k+\ell}-t_k},\quad\text{for }\ell\ge1. \]
5.2

Defining the function $G(t,x)=(t-x)_+^3$, we can establish that in fact

\[ B(x)=4\,[-2,-1,0,1,2]_t\,G(t,x)=4\,[-2,-1,0,1,2]_t\,(t-x)_+^3, \]
5.3

so that the cubic B-spline B for the knots $-2,-1,0,1,2$ is generated by applying the fourth order divided difference for these knots to the function G with respect to the variable t and multiplying by the difference last knot minus first knot. The equation is translation invariant in the sense that

\[ B(x-k)=4\,[k-2,k-1,k,k+1,k+2]_t\,(t-x)_+^3. \]
5.4
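
A small sketch (mine, not from the paper) of (5.3): the recursive divided difference (5.1)/(5.2) applied to $G(t,x)=(t-x)_+^3$ in the knots $-2,\ldots,2$, multiplied by 4, should reproduce the piecewise formula of item 2.2.

```python
def divided_difference(points, f):
    """Divided difference [points]f computed via the recursion (5.2)."""
    vals = [f(p) for p in points]
    pts = list(points)
    for level in range(1, len(pts)):
        vals = [(vals[i + 1] - vals[i]) / (pts[i + level] - pts[i])
                for i in range(len(vals) - 1)]
    return vals[0]

def B_via_truncated_powers(x):
    G = lambda t: max(t - x, 0.0) ** 3          # G(t, x) = (t - x)_+^3
    return 4.0 * divided_difference([-2, -1, 0, 1, 2], G)

def B_piecewise(x):
    if -2 <= x <= -1: return (x + 2) ** 3 / 6
    if -1 <= x <= 0:  return ((x + 2) ** 3 - 4 * (x + 1) ** 3) / 6
    if 0 <= x <= 1:   return ((2 - x) ** 3 - 4 * (1 - x) ** 3) / 6
    if 1 <= x <= 2:   return (2 - x) ** 3 / 6
    return 0.0

for x in (-2.5, -1.2, 0.0, 0.7, 1.5, 3.0):
    assert abs(B_via_truncated_powers(x) - B_piecewise(x)) < 1e-12
```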

5.2 Green’s functions (Item 2.6)

Now we proceed to investigate other settings, where the truncated power functions and thus B-splines as their divided differences are of importance.

Definition 5.2

For the linear differential operator

\[ L[y]=y^{(4)}+a_3(x)y^{(3)}+a_2(x)y^{(2)}+a_1(x)y^{(1)}+a_0(x)y \]
5.5

and initial conditions

\[ y^{(3)}(a)=g_3,\quad y^{(2)}(a)=g_2,\quad y^{(1)}(a)=g_1,\quad y(a)=g_0 \]
5.6

a bivariate function $G_L(x,t)$ defined on $[a,b]^2$ is called Green’s function for the differential operator (5.5) with given initial conditions (5.6), if the unique solution to the initial value problem (IVP) given by

\[ L[y]=h,\quad\text{almost everywhere} \]

for any right hand side function h that is integrable on the interval $[a,b]$ and arbitrary initial conditions (5.6) can be expressed as

\[ y_p(x)+\int_a^xG_L(x,t)h(t)\,dt, \]

where $y_p$ is the solution of the homogeneous IVP $L[y]=0$ under the given initial conditions, while the integral term solves the equation $L[y]=h$ under homogeneous initial conditions.

We can now compute directly that for the simple operator of fourth order differentiation $L=D^4$, i.e., $L[y]=y^{(4)}$, we obtain as $y_p$ the Taylor polynomial of order four at a

\[ y_p(x)=\sum_{k=0}^{3}g_k\frac{(x-a)^k}{k!}. \]

Differentiating under the integral sign yields

\[ D_x^3\Big(\int_a^b\tfrac16(x-t)_+^3h(t)\,dt\Big)=\int_a^b(x-t)_+^0h(t)\,dt=\int_a^xh(t)\,dt,\qquad D_x^4\Big(\int_a^b\tfrac16(x-t)_+^3h(t)\,dt\Big)=h(x) \]

and thus according to Definition 5.2

\[ G_{D^4}(x,t)=\tfrac16(x-t)_+^3, \]

yielding as the solution of the IVP on $[a,b]$

\[ \sum_{j=0}^{3}g_j\frac{(x-a)^j}{j!}+\int_a^b\tfrac16(x-t)_+^3h(t)\,dt. \]

This holds true also for other spline degrees n, the differential operator $D^{n+1}$, and the truncated power function $\frac{1}{n!}(x-t)_+^n$ as the corresponding Green’s function, see [ 5 ] . Note that we have used $(x-t)_+^3$ here, while we use $(t-x)_+^3$ in the B-spline definition. We have, however, that

\[ (x-t)_+^3=(x-t)^3+(t-x)_+^3,\quad\text{for all }x,t\in\mathbb{R}, \]

and since the fourth order divided difference annihilates polynomials up to degree 3

\[ [-2,-1,0,1,2]_t\,(t-x)_+^3=[-2,-1,0,1,2]_t\,(x-t)_+^3 \]
5.7

and thus item 2.6 follows, if we choose specifically $[a,b]=[-2,2]$.

5.3 Peano kernels (Item 2.7)

The annihilation property of divided differences comes again into play in a special case of a remainder theorem originally established by Peano (see [ 15 ] ). In my opinion this is a really powerful theorem as it yields remainder terms for various types of interpolation, numerical integration and differencing. For cubic polynomials this allows – for example – to state remainder formulae for Lagrange interpolation in four points, for the cubic Taylor polynomial in one given point, or for Simpson’s rule for numerical quadrature (try and see what you obtain).

Theorem 5.3

Let a linear functional on the function space C4[a,b] be given in the form

\[ L(h)=\int_a^b\big(a_3(t)h^{(3)}(t)+a_2(t)h^{(2)}(t)+a_1(t)h^{(1)}(t)+a_0(t)h(t)\big)\,dt+\sum_{j=1}^{j_0}b_{j,0}h(t_{j,0})+\sum_{j=1}^{j_1}b_{j,1}h^{(1)}(t_{j,1})+\sum_{j=1}^{j_2}b_{j,2}h^{(2)}(t_{j,2})+\sum_{j=1}^{j_3}b_{j,3}h^{(3)}(t_{j,3}), \]

where the functions $a_j$ are assumed to be piecewise continuous over $[a,b]$, the points $t_{j,k}$ lie in $[a,b]$, while all $j_k$ are in $\mathbb{N}$ and all $b_{j,k}$ are real numbers. If the functional L annihilates polynomials of degree 3, i.e.,

\[ L(p)=0,\quad\text{for }p\in\Pi_3, \]

then we have for hC4[a,b] the representation

\[ L(h)=\int_a^bh^{(4)}(x)K(x)\,dx \]

with

\[ K(x)=\frac{1}{3!}L_t\big((t-x)_+^3\big), \]

meaning that the functional L is applied to the truncated power function as a function of t, treating x as a free parameter. The function K is called the Peano kernel of the functional L.

This remainder theorem also covers our divided differences of fourth order, since they annihilate cubics, and we obtain for all $h\in C^4[a,b]$

\[ [-2,-1,0,1,2]h=\frac{1}{3!}\int_{-2}^{2}h^{(4)}(x)\,[-2,-1,0,1,2]_t(t-x)_+^3\,dx=\frac{1}{4!}\int_{-2}^{2}h^{(4)}(x)B(x)\,dx, \]

so the cubic B-spline B is – up to a factor – the Peano kernel for the divided difference functional, and this holds also for other degrees and nonuniform knots.

Note that we obtain from here the definite integral of the cubic B-spline B by choosing $h(x)=x^4$, so that

\[ \int_{-\infty}^{+\infty}B(x)\,dx=1. \]
5.9

Finally, we point out that for equally spaced points, there is of course a direct correspondence between divided differences and central differences. Over the integers we define the central difference

\[ \Delta h(x)=h\big(x+\tfrac12\big)-h\big(x-\tfrac12\big) \]

and iteratively

\[ \Delta^nh(x)=\Delta^{n-1}(\Delta h(x)). \]

We thus compute that

\[ \Delta^4h(0)=h(-2)-4h(-1)+6h(0)-4h(1)+h(2). \]

On the other hand according to the recursive computation of the divided differences (5.2), we have

\[ [-2,-1,0,1,2]h=\frac{1}{4!}\big(h(-2)-4h(-1)+6h(0)-4h(1)+h(2)\big) \]

and thus (recalling (5.8)) we obtain item 2.7:

\[ \int_{-\infty}^{\infty}h^{(4)}(x)B(x)\,dx=\Delta^4h(0). \]

5.4 A geometric interpretation (Item 2.8)

The Peano kernel property of the B-spline B and an integral representation of divided differences (again valid for any choice of simple knots) give rise to our first geometric interpretation of B. Using again the recursion for divided differences (5.2), it is possible to establish the Hermite-Genocchi integral representation of a divided difference for a smooth function $h\in C^4[x_0,x_4]$

\[ [x_0,x_1,x_2,x_3,x_4]h=\int_{\Sigma}h^{(4)}(x_0s_0+x_1s_1+x_2s_2+x_3s_3+x_4s_4)\,ds_1\,ds_2\,ds_3\,ds_4, \]

where the domain of integration is the simplex $\Sigma$ in $\mathbb{R}^4$ given as

\[ \{(s_1,s_2,s_3,s_4):\ s_1\ge0,\ s_2\ge0,\ s_3\ge0,\ s_4\ge0,\ s_0=1-s_1-s_2-s_3-s_4\ge0\}. \]

A straightforward proof for general order divided differences is of course done by induction over the number of points xj.

We now position a unit-volume simplex $\Sigma$ in $\mathbb{R}^4$, so that its five vertices project orthogonally onto the points $(x_j,0,0,0)^T$ on the x-axis. There are a lot of possibilities for this, a simple one being the vertices

\[ X_0=(x_0,0,0,0)^T,\quad X_1=(x_1,0,0,0)^T,\quad X_2=\Big(x_2,\tfrac{2}{x_1-x_0},0,0\Big)^T,\quad X_3=(x_3,0,3,0)^T,\quad X_4=(x_4,0,0,4)^T. \]

Regardless of the actual choice of $X_j=(x_j,y_j,z_j,u_j)^T$, as long as $4!\,\mathrm{vol}_4(\Sigma)=\det(X_1-X_0,X_2-X_0,X_3-X_0,X_4-X_0)=4!$, we make a change of variables

\[ \begin{aligned}x&=x_0(1-s_1-s_2-s_3-s_4)+x_1s_1+x_2s_2+x_3s_3+x_4s_4,\\y&=y_0(1-s_1-s_2-s_3-s_4)+y_1s_1+y_2s_2+y_3s_3+y_4s_4,\\z&=z_0(1-s_1-s_2-s_3-s_4)+z_1s_1+z_2s_2+z_3s_3+z_4s_4,\\u&=u_0(1-s_1-s_2-s_3-s_4)+u_1s_1+u_2s_2+u_3s_3+u_4s_4.\end{aligned} \]

The determinant of the Jacobian $\frac{\partial(x,y,z,u)}{\partial(s_1,s_2,s_3,s_4)}$ needed for the corresponding integral transformation is $\det(X_1-X_0,X_2-X_0,X_3-X_0,X_4-X_0)=4!$, so that we obtain by the Hermite-Genocchi formula (5.10)

\[ 4!\,[x_0,x_1,x_2,x_3,x_4]h=\int_{\Sigma}h^{(4)}\big(x(s_1,s_2,s_3,s_4)\big)\left|\frac{\partial(x,y,z,u)}{\partial(s_1,s_2,s_3,s_4)}\right|ds_1\,ds_2\,ds_3\,ds_4=\int_{\Sigma}h^{(4)}(x)\,dx\,dy\,dz\,du=\int_{x_0}^{x_4}h^{(4)}(x)\Big(\int_{(x,y,z,u)\in\Sigma}1\,dy\,dz\,du\Big)dx \]

The interior triple integral is now indeed the 3-dimensional volume of the intersection of the hyperplane given by $x=x_f$, with $x_f$ fixed between $x_0$ and $x_4$, and the simplex $\Sigma$. On the other hand, by (5.8), it must be the B-spline B as the Peano kernel of the divided difference, since the equations hold for all functions $h\in C^4[a,b]$, yielding

\[ B(x)=\int_{(x,y,z,u)\in\Sigma}1\,dy\,dz\,du, \]

and thus item 2.8. Note how the volume is always the same, regardless of how the simplex is chosen, provided the first coordinate of each $X_j$ is $x_j$ and the volume of $\Sigma$ is 1, a nice illustration of Cavalieri’s principle.

6 Fourier transforms

We will now look at situations, where the function B arises specifically because its breakpoints lie not just anywhere, but in the integers. Schoenberg called this the cardinal case, see his monograph [ 40 ] . For background material on Fourier transforms, used here without proof, consult a textbook on Fourier analysis. I for myself used the formulations from Chapter 2 in [ 11 ] . Note that i everywhere in this text denotes the imaginary unit, it is never used as an index.

Recall first that the Fourier transform of a function $g\in L^1(\mathbb{R})$ is defined as

\[ \hat g(\omega)=(Fg)(\omega)=\int_{-\infty}^{\infty}g(x)e^{-i\omega x}\,dx \]

and the inverse Fourier transform of $\hat g\in L^1(\mathbb{R})$ as

\[ (F^{-1}\hat g)(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\hat g(\omega)e^{i\omega x}\,d\omega. \]

Next recall that, provided both g and $\hat g$ are in $L^1(\mathbb{R})$, at every point x where g is continuous

\[ g(x)=(F^{-1}\hat g)(x). \]
6.1

Let $g,h\in L^1(\mathbb{R})$; then the convolution $g*h$ is also in $L^1(\mathbb{R})$, where

\[ (g*h)(x)=\int_{-\infty}^{\infty}g(x-t)h(t)\,dt. \]

For the Fourier transform of a convolution we have

\[ (F(g*h))(\omega)=(Fg)(\omega)\cdot(Fh)(\omega). \]
6.2

Recall now that the sinc function is defined as

\[ \mathrm{sinc}(x)=\begin{cases}\frac{\sin x}{x}&\text{if }x\ne0,\\1&\text{if }x=0.\end{cases} \]

It turns out that the sinc function is closely related to piecewise polynomials through the Fourier transform.

A fundamental result for practical applications is the Sampling Theorem. The theorem deals with functions that are bandlimited, i.e. whose Fourier transform is nonzero only on an interval of finite length, say $(Fg)(\omega)=0$ for $|\omega|>L$. In this case the whole function (signal in engineering terms) can be reconstructed from countably many discrete samples, namely the function values $g(k\pi/L)$ for $k\in\mathbb{Z}$, as

\[ g(x)=\sum_{k=-\infty}^{\infty}g\Big(\frac{k\pi}{L}\Big)\,\mathrm{sinc}\Big(L\Big(x-\frac{k\pi}{L}\Big)\Big). \]
6.5

Compare (6.5), where a bandlimited function is represented using the sinc function and its translates, to (4.1), where a piecewise polynomial over the integers is represented by the B-spline B and its integer translates. While the convergence in (6.5) is very slow, evaluation is quickly done in (4.1) due to compact support and the recursion formula. The coefficients in (4.1), however, are not simply samples of the given spline function. In Schoenberg’s already mentioned first spline paper of 1946 [ 38 ] , the series (6.5) was the starting point for his studies of spline series like (4.1).

6.1 The Fourier transform of the B-spline B (Item 2.9)

Let us start by computing the Fourier transform of our cubic B-spline B. Splitting up the complex-valued function $h_\omega(x)=e^{-i\omega x}$ into its real and imaginary parts, we find that we can actually use the Peano kernel property (5.8) and item 2.7 also for this function to obtain

\[ \omega^4\int_{-\infty}^{\infty}e^{-i\omega x}B(x)\,dx=\Delta^4h_\omega(0). \]

We have

\[ \Delta h_\omega(x)=e^{-i\omega(x+\frac12)}-e^{-i\omega(x-\frac12)}=-2i\sin\Big(\frac{\omega}{2}\Big)e^{-i\omega x} \]

and thus

\[ \Delta^4h_\omega(x)=(-2i)^4\sin^4\Big(\frac{\omega}{2}\Big)e^{-i\omega x}=2^4\sin^4\Big(\frac{\omega}{2}\Big)e^{-i\omega x} \]

and

\[ \Delta^4h_\omega(0)=2^4\sin^4\Big(\frac{\omega}{2}\Big). \]

This way we obtain that the Fourier transform of B is a power of a dilated sinc function:

\[ (FB)(\omega)=\frac{1}{\omega^4}\Delta^4h_\omega(0)=\left(\frac{\sin(\omega/2)}{\omega/2}\right)^4=\mathrm{sinc}^4\Big(\frac{\omega}{2}\Big). \]

Since B is continuous, due to (6.1), taking the inverse Fourier transform yields item 2.9.

\[ B(x)=F^{-1}\big(\mathrm{sinc}^4(\cdot/2)\big)(x)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}\left(\frac{\sin(\omega/2)}{\omega/2}\right)^4e^{i\omega x}\,d\omega. \]
6.6
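
As a numerical sanity check (my own sketch, not from the paper), one can approximate the inverse Fourier integral in (6.6) by a simple quadrature over a truncated frequency range (the integrand decays like $\omega^{-4}$) and compare with the piecewise formula of item 2.2.

```python
import numpy as np

def B_piecewise(x):
    if -2 <= x <= -1: return (x + 2) ** 3 / 6
    if -1 <= x <= 0:  return ((x + 2) ** 3 - 4 * (x + 1) ** 3) / 6
    if 0 <= x <= 1:   return ((2 - x) ** 3 - 4 * (1 - x) ** 3) / 6
    if 1 <= x <= 2:   return (2 - x) ** 3 / 6
    return 0.0

def B_via_fourier(x, cutoff=400.0, n=400_000):
    """Approximate (1/2pi) * integral of sinc^4(w/2) e^{iwx} dw on [-cutoff, cutoff]."""
    w = np.linspace(-cutoff, cutoff, n)
    # np.sinc(t) = sin(pi t)/(pi t), so sinc(w/(2 pi)) = sin(w/2)/(w/2);
    # the imaginary (odd) part of the integrand drops out, leaving the cosine.
    integrand = np.sinc(w / (2 * np.pi)) ** 4 * np.cos(w * x)
    return integrand.sum() * (w[1] - w[0]) / (2 * np.pi)

for x in (0.0, 0.5, 1.0, 1.7):
    assert abs(B_via_fourier(x) - B_piecewise(x)) < 1e-3
```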

6.2 Convolutions (Item 2.10)

The occurrence of the 4-th power of the sinc-function for a cubic B-spline (order = 4) is of course no accident. Starting with piecewise constants, namely the characteristic function $M_1$ of the half-open interval $[-\tfrac12,\tfrac12)$ already presented in (4.15), i.e.,

\[ M_1(x)=\begin{cases}1,&\text{if }x\in[-\tfrac12,\tfrac12),\\0,&\text{if }x\notin[-\tfrac12,\tfrac12),\end{cases} \]

we define recursively for $n\ge2$, using convolution,

\[ M_n(x)=(M_{n-1}*M_1)(x)=\int_{-\infty}^{\infty}M_{n-1}(x-t)M_1(t)\,dt=\int_{-\frac12}^{\frac12}M_{n-1}(x-t)\,dt=\int_{x-\frac12}^{x+\frac12}M_{n-1}(t)\,dt,\quad\text{for all }x\in\mathbb{R}. \]

Check that this recursive definition is actually consistent with the piecewise definitions in (4.12) and (4.8).

Since the Fourier transform of M1 is

\[ (FM_1)(\omega)=\int_{-\infty}^{\infty}M_1(x)e^{-i\omega x}\,dx=\int_{-\frac12}^{\frac12}e^{-i\omega x}\,dx=\frac{\sin\frac{\omega}{2}}{\frac{\omega}{2}}, \]

the Fourier transforms of Mn are due to convolution property (6.2)

\[ (FM_n)(\omega)=\big((FM_1)(\omega)\big)^n=\left(\frac{\sin\frac{\omega}{2}}{\frac{\omega}{2}}\right)^n. \]

While $M_1$ still has two jumps, we see from (6.9) that the smoothness of $M_n$ increases by one with each convolution, i.e. $M_n\in C^{n-2}(\mathbb{R})$. Thus taking the inverse transform establishes for $n\ge2$

\[ M_n(x)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}\left(\frac{\sin(\omega/2)}{\omega/2}\right)^ne^{i\omega x}\,d\omega, \]

implying of course M4=B.

Since the sinc-function is even, we can remove the complex exponential in the integrand and write

\[ M_n(x)=\frac{1}{\pi}\int_0^{+\infty}\left(\frac{\sin(\omega/2)}{\omega/2}\right)^n\cos(x\omega)\,d\omega. \]
6.10

Integrals of this form have been investigated for a long time by a lot of mathematicians independent of spline theory, see [ 7 ] for a historical review.

We have already seen in (4.12) that for the hat function N we have N=M2. This establishes, as the convolution is associative, that

\[ B(x)=(((M_1*M_1)*M_1)*M_1)(x)=((M_1*M_1)*(M_1*M_1))(x)=(N*N)(x), \]

producing item 2.10.

6.3 A moving unit impulse (Item 2.11)

In [ 28 ] it is described how B-splines can be generated recursively by intersecting the graph of a lower order B-spline with a unit impulse moving along the coordinate axis, as derived in [ 36 ] . In more explicit terms this would mean the following.

Moving a unit square along the x-axis with its center at the point $(x,\tfrac12)$ means that its four corners lie at $(x-\tfrac12,0)$, $(x+\tfrac12,0)$, $(x+\tfrac12,1)$, $(x-\tfrac12,1)$ for a given $x\in\mathbb{R}$. The area A(x) of the region cut out of the square by the graph of the hat function N(x) is then nothing but the definite integral of the hat function between the limits of integration $x-\tfrac12$ and $x+\tfrac12$:

\[ A(x)=\int_{x-\frac12}^{x+\frac12}N(t)\,dt=M_3(x). \]

Thus a repetition of the process using A(x) instead of N(x) produces item 2.11:

\[ \int_{x-\frac12}^{x+\frac12}A(t)\,dt=\int_{x-\frac12}^{x+\frac12}M_3(t)\,dt=M_4(x)=B(x). \]

7 Subdivision

7.1 The refinement equation (Item 2.12)

A simple dilation of the B-spline series (4.1) by a factor of 2 yields that any piecewise cubic polynomial function over the half integers $\tfrac12\mathbb{Z}$ that lies in $C^2(\mathbb{R})$ can be represented by the sum

\[ \sum_{k\in\mathbb{Z}}c_kB(2x-k). \]

Clearly the B-spline B qualifies as such a function, the knots $\tfrac12+\mathbb{Z}$ are just not active as B is a piecewise function over the knots $\mathbb{Z}$. So there must be coefficients $p_k$, called two-scale coefficients, such that

\[ B(x)=\sum_{k\in\mathbb{Z}}p_kB(2x-k). \]
7.1

The specific functional equation for the refinement of B – as given in item 2.12 – can now be established using the Fourier transform of B, indicating also the corresponding ones for all the functions Mn.

Note that in 2.12 we can compute the values of any solution function f in the points $\tfrac12+\mathbb{Z}$ from the prescribed values in $\mathbb{Z}$ using the functional equation. Repeating this process, the values in $\tfrac14+\tfrac12\mathbb{Z}$ can be computed from those in $\tfrac12+\mathbb{Z}$ and so forth, so that the values of f are established for all arguments of the form $k/2^r$. The continuity of f takes care of the remaining rational numbers and all irrational ones, ensuring that there can be only one continuous solution function of the functional equation with the prescribed function values in the integers.

So we take the Fourier transforms in (7.1) to obtain

\[ (FB)(\omega)=F\Big(\sum_{k\in\mathbb{Z}}p_kB(2\cdot-k)\Big)(\omega). \]

A dilation by the factor 2 in the function results in a factor of $1/2$ in the Fourier transform, the translation by an integer k in an additional factor of $e^{-ik\frac{\omega}{2}}$, yielding in total

\[ (FB)(\omega)=\frac12\Big(\sum_{k\in\mathbb{Z}}p_ke^{-ik\frac{\omega}{2}}\Big)(FB)\Big(\frac{\omega}{2}\Big). \]

Denoting $z=e^{-i\frac{\omega}{2}}$, the function

\[ P(z)=\sum_{k\in\mathbb{Z}}p_kz^k \]

is called the symbol of the refinable function and its investigation is one of the cornerstones in subdivision theory (see for example [ 8 ] and the tutorial [ 20 ] ) and wavelet analysis (see for example [ 14 ] , [ 11 ] ) to derive properties of the function obeying a refinement equation with given two-scale coefficients.

Our task here is much easier since we have already computed the Fourier transform of B before and can just derive P by division, i.e.,

\[ P(z)=\frac{2(FB)(\omega)}{(FB)\big(\frac{\omega}{2}\big)}=\frac{1}{2^3}\left(\frac{\sin\frac{\omega}{2}}{\sin\frac{\omega}{4}}\right)^4=\frac{1}{2^3}\left(\frac{e^{i\frac{\omega}{2}}-e^{-i\frac{\omega}{2}}}{e^{i\frac{\omega}{4}}-e^{-i\frac{\omega}{4}}}\right)^4=\frac{1}{2^3}\left(e^{i\frac{\omega}{4}}\frac{1-e^{-i\omega}}{1-e^{-i\frac{\omega}{2}}}\right)^4=\frac{1}{2^3}z^{-2}\left(\frac{1-z^2}{1-z}\right)^4=\frac{1}{2^3}z^{-2}(1+z)^4=\frac{1}{2^3}\sum_{k=0}^{4}\binom{4}{k}z^{k-2}. \]

Thus the two-scale coefficients for B are

\[ p_k=\begin{cases}\frac{1}{2^3}\binom{4}{k+2},&\text{for }k=-2,\ldots,2,\\0,&\text{for }|k|\ge3.\end{cases} \]

Note that the famous Daubechies functions [ 14 ] are derived from splines by starting from the spline symbol P(z) and multiplying by another term S(z) chosen to achieve orthogonality of the translates of the refinable function.
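
The two-scale relation (7.1) with these coefficients is easy to verify numerically; the following sketch (again my own Python illustration) checks $B(x)=\sum_kp_kB(2x-k)$ at random points.

```python
import random

def B(x):
    if -2 <= x <= -1: return (x + 2) ** 3 / 6
    if -1 <= x <= 0:  return ((x + 2) ** 3 - 4 * (x + 1) ** 3) / 6
    if 0 <= x <= 1:   return ((2 - x) ** 3 - 4 * (1 - x) ** 3) / 6
    if 1 <= x <= 2:   return (2 - x) ** 3 / 6
    return 0.0

# two-scale coefficients p_k = binom(4, k+2) / 8 for k = -2,...,2
p = {-2: 1/8, -1: 4/8, 0: 6/8, 1: 4/8, 2: 1/8}

random.seed(1)
for _ in range(100):
    x = random.uniform(-3, 3)
    refined = sum(pk * B(2 * x - k) for k, pk in p.items())
    assert abs(B(x) - refined) < 1e-12
```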

7.2 Subdivision curves (Item 2.13)

How are the coefficients related when going from the piecewise cubics over the integers $\mathbb{Z}$ to the half integers $\tfrac12\mathbb{Z}$? In other words, given a function

\[ g(x)=\sum_{k\in\mathbb{Z}}c_k^0B(x-k), \]

we know that g can also be written as

\[ g(x)=\sum_{k\in\mathbb{Z}}c_k^1B(2x-k), \]

so the question is what the relationship is between the coefficients $c_k^0$ and $c_k^1$. Using the coefficients of the symbol P, also called the mask coefficients, we find with item 2.12 that

\[ g(x)=\sum_{k\in\mathbb{Z}}c_k^0B(x-k)=\sum_{k\in\mathbb{Z}}c_k^0\sum_{\ell\in\mathbb{Z}}p_\ell B(2x-2k-\ell)=\sum_{k\in\mathbb{Z}}c_k^0\sum_{\ell\in\mathbb{Z}}p_{\ell-2k}B(2x-\ell)=\sum_{\ell\in\mathbb{Z}}\Big(\sum_{k\in\mathbb{Z}}c_k^0p_{\ell-2k}\Big)B(2x-\ell). \]

Since the representations are unique, we obtain

\[ c_\ell^1=\sum_{k\in\mathbb{Z}}c_k^0p_{\ell-2k},\quad\text{for all }\ell\in\mathbb{Z}. \]

Splitting up in even and odd indices and using the fact that only five coefficients $p_k$ are nonzero results in

\[ c_{2\ell}^1=\sum_{k\in\mathbb{Z}}c_k^0p_{2\ell-2k}=\tfrac18c_{\ell-1}^0+\tfrac68c_\ell^0+\tfrac18c_{\ell+1}^0,\qquad c_{2\ell+1}^1=\sum_{k\in\mathbb{Z}}c_k^0p_{2\ell+1-2k}=\tfrac48c_\ell^0+\tfrac48c_{\ell+1}^0. \]

Clearly we can iterate this procedure and obtain

\[ g(x)=\sum_{k\in\mathbb{Z}}c_k^rB(2^rx-k), \]

with

\[ c_{2\ell}^r=\sum_{k\in\mathbb{Z}}c_k^{r-1}p_{2\ell-2k}=\tfrac18c_{\ell-1}^{r-1}+\tfrac68c_\ell^{r-1}+\tfrac18c_{\ell+1}^{r-1},\qquad c_{2\ell+1}^r=\sum_{k\in\mathbb{Z}}c_k^{r-1}p_{2\ell+1-2k}=\tfrac48c_\ell^{r-1}+\tfrac48c_{\ell+1}^{r-1}. \]

A straightforward way to approximate the graph of the function, i.e. the set of pairs $(x,g(x))$ for all $x\in\mathbb{R}$, is to draw the polygon connecting the points $P_k^0=(k,c_k^0)$ for all $k\in\mathbb{Z}$ instead. Note that the polygon does not interpolate, since we take the k-th coefficient of the B-spline expansion as the second component, not the function value of g at the point k.

A finer polygon and hopefully better approximation is then obtained by connecting the points $P_{2k}^1=(k,c_{2k}^1)$ for all $k\in\tfrac12\mathbb{Z}$ and so forth for all $r\in\mathbb{N}$: $P_{2^rk}^r=(k,c_{2^rk}^r)$ for all $k\in\tfrac{1}{2^r}\mathbb{Z}$. We contend that the polygons actually converge to the graph of the function. Indeed we have for all $k\in\mathbb{Z}$

\[ \Big\|\Big(\frac{k}{2^r},g\Big(\frac{k}{2^r}\Big)\Big)-P_k^r\Big\|_\infty=\Big\|\Big(\frac{k}{2^r},g\Big(\frac{k}{2^r}\Big)\Big)-\Big(\frac{k}{2^r},c_k^r\Big)\Big\|_\infty=\Big|g\Big(\frac{k}{2^r}\Big)-c_k^r\Big|. \]

Using the r-th level representation of g we find

\[ g\Big(\frac{k}{2^r}\Big)=\sum_{\ell\in\mathbb{Z}}c_\ell^rB(k-\ell)=\tfrac16c_{k-1}^r+\tfrac23c_k^r+\tfrac16c_{k+1}^r \]

and thus

\[ g\Big(\frac{k}{2^r}\Big)-c_k^r=\tfrac16\big(c_{k-1}^r-c_k^r\big)+\tfrac16\big(c_{k+1}^r-c_k^r\big). \]

Using the difference operator for biinfinite sequences $c=\{c_\ell\}_{\ell\in\mathbb{Z}}$ defined by $(\Delta c)_\ell=c_{\ell+1}-c_\ell$, we thus get

\[ \Big\|\Big(\frac{k}{2^r},g\Big(\frac{k}{2^r}\Big)\Big)-P_k^r\Big\|_\infty\le\frac13\|\Delta c^r\|_\infty. \]

It now suffices to investigate how the differences of consecutive levels are related. We have

\[ \big|c_{2\ell+1}^r-c_{2\ell}^r\big|=\Big|\Big(\tfrac48c_\ell^{r-1}+\tfrac48c_{\ell+1}^{r-1}\Big)-\Big(\tfrac18c_{\ell-1}^{r-1}+\tfrac68c_\ell^{r-1}+\tfrac18c_{\ell+1}^{r-1}\Big)\Big|=\Big|\tfrac38c_{\ell+1}^{r-1}-\tfrac14c_\ell^{r-1}-\tfrac18c_{\ell-1}^{r-1}\Big|\le\tfrac38\big|c_{\ell+1}^{r-1}-c_\ell^{r-1}\big|+\tfrac18\big|c_\ell^{r-1}-c_{\ell-1}^{r-1}\big|\le\tfrac12\|\Delta c^{r-1}\|_\infty \]

and

\[ \big|c_{2\ell}^r-c_{2\ell-1}^r\big|=\Big|\Big(\tfrac18c_{\ell-1}^{r-1}+\tfrac68c_\ell^{r-1}+\tfrac18c_{\ell+1}^{r-1}\Big)-\Big(\tfrac48c_{\ell-1}^{r-1}+\tfrac48c_\ell^{r-1}\Big)\Big|=\Big|\tfrac18c_{\ell+1}^{r-1}+\tfrac14c_\ell^{r-1}-\tfrac38c_{\ell-1}^{r-1}\Big|\le\tfrac18\big|c_{\ell+1}^{r-1}-c_\ell^{r-1}\big|+\tfrac38\big|c_\ell^{r-1}-c_{\ell-1}^{r-1}\big|\le\tfrac12\|\Delta c^{r-1}\|_\infty, \]

obtaining

\[ \|\Delta c^r\|_\infty\le\tfrac12\|\Delta c^{r-1}\|_\infty \]

and thus by iteration

\[ \Big\|\Big(\frac{k}{2^r},g\Big(\frac{k}{2^r}\Big)\Big)-P_k^r\Big\|_\infty\le\frac13\cdot\frac{1}{2^r}\|\Delta c^0\|_\infty. \]

We have thus established that for a given fraction $k/2^{r_1}$, the coefficients $c_{k\,2^{r-r_1}}^r$ converge to $g(k/2^{r_1})$ for $r\to\infty$, and the continuity of g takes care that the sequence of polygons converges to the graph of g as a whole. In item 2.13 we have specifically chosen the initial coefficients $c_k^0=\delta_{k,0}$, and thus $g=B$, so that the polygons converge to the graph of B itself.
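
A direct implementation of the refinement rules (a Python sketch of my own, not part of the paper) shows the convergence claimed in item 2.13: starting from the delta sequence, the refined coefficients at the dyadic points approach the corresponding values of B at the rate $\tfrac13\,2^{-r}$ derived above.

```python
def subdivide(c):
    """One refinement level of the rules from Section 7.2:
    c_{2l} = (c_{l-1} + 6 c_l + c_{l+1})/8,  c_{2l+1} = (c_l + c_{l+1})/2.
    The sequence is stored as a dict {index: value}, missing entries are 0."""
    get = lambda k: c.get(k, 0.0)
    fine = {}
    for l in range(min(c) - 1, max(c) + 2):
        fine[2 * l] = (get(l - 1) + 6 * get(l) + get(l + 1)) / 8
        fine[2 * l + 1] = (get(l) + get(l + 1)) / 2
    return fine

def B(x):
    if -2 <= x <= -1: return (x + 2) ** 3 / 6
    if -1 <= x <= 0:  return ((x + 2) ** 3 - 4 * (x + 1) ** 3) / 6
    if 0 <= x <= 1:   return ((2 - x) ** 3 - 4 * (1 - x) ** 3) / 6
    if 1 <= x <= 2:   return (2 - x) ** 3 / 6
    return 0.0

c, r = {0: 1.0}, 0          # c_k^0 = delta_{k,0}, so the limit function is B itself
for _ in range(6):
    c = subdivide(c)
    r += 1

err = max(abs(v - B(k / 2 ** r)) for k, v in c.items())
print(r, err)               # the error obeys the bound (1/3) * 2**(-r)
assert err <= (1 / 3) * 0.5 ** r + 1e-12
```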

The study of subdivision curves in Computer Aided Geometric Design started with Chaikin [ 9 ] , but the idea dates back to de Rham in 1947 [ 34 ] . In the original process, called corner cutting, the rules for the coefficients, and thus the corresponding rules for the control points $P_{2\ell}^r$ and $P_{2\ell+1}^r$, are not given by (7.4) and (7.5), but instead by

\[ c_{2\ell}^r=\tfrac34c_\ell^{r-1}+\tfrac14c_{\ell+1}^{r-1},\qquad c_{2\ell+1}^r=\tfrac14c_\ell^{r-1}+\tfrac34c_{\ell+1}^{r-1}. \]

Compute for a given initial polygon a few steps of the algorithm and draw the resulting polygons to see why the procedure is called corner cutting.
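
For those who want to try this on the computer rather than by hand, here is a minimal sketch of the corner-cutting step in Python (my own illustration, with an arbitrary initial polygon); plotting the successive polygons makes the name obvious.

```python
def chaikin(points):
    """One corner-cutting step: replace each edge (P, Q) by the two points
    3/4 P + 1/4 Q and 1/4 P + 3/4 Q."""
    new = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        new.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
        new.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
    return new

polygon = [(0, 0), (1, 2), (3, 2), (4, 0)]   # an arbitrary initial control polygon
for _ in range(3):
    polygon = chaikin(polygon)
print(len(polygon), polygon[:3])             # the corners get cut off step by step
```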

While in our notation and terminology it is fairly easy to see that the coefficients involved are in fact the two-scale coefficients of the piecewise quadratic B-spline $M_3$, and thus the corresponding polygons converge to the piecewise quadratic function $\sum_kc_k^0M_3(x-k)$, this was not known when the algorithm was first described.

Subdivision procedures are also available for surfaces, and are even more important there. I would like to refer those who are interested in finding out more to the tutorials by Sabin [ 37 ] . At this point let me just mention that these surfaces are now the state of the art in computer animation movies [ 16 ] .

8 Probability

8.1 An urn model (Item 2.14)

In Subsection 7.1 we have encountered binomial coefficients in the two-scale coefficients of B-splines. Binomial coefficients can be derived from Pascal’s triangle, which is related to urn models in probability. So it should not come as a great surprise that we can link the B-spline to urn models as well.

We consider an urn model introduced in very general form already by Friedman [ 24 ] . Initially the urn contains w white balls and b black balls. One ball at a time is drawn at random from the urn and its color is inspected. It is then returned to the urn and a constant number c1 of balls of the same color and a constant number c2 of balls of the opposite color are added to the urn.

The aim is to study the probability of selecting exactly K white balls in the first N trials. It turns out (see Goldman [ 25 ] , where this material is taken from) that this probability can be described in dependence on three parameters: the probability of selecting a white ball on the first trial and the percentages of balls of the same and of the opposite color added to the urn after the first trial.

Using the notation

\[ t=\frac{w}{w+b} \]

for the probability of selecting a white ball on the first trial, we use the term $B_K^N(t)$ for the probability of selecting exactly K white balls in the first N trials.

For the simplest case of $c_1=c_2=0$ we obtain the binomial distribution for sampling with replacement, i.e.

\[ B_K^N(t)=\binom{N}{K}t^K(1-t)^{N-K},\qquad t\in[0,1]. \]

The case $c_1\ne0$, $c_2=0$ corresponds to classical Polya-Eggenberger models [ 21 ] .

The urn models for the cases $c_2=w+b+c_1$ are called spline (!) models, and we will just look at the simplest case $c_1=0$, $c_2=w+b$. This means that the number of balls after the first N trials is always $(N+1)(w+b)$, but the number of white and black balls is determined by the outcome of those trials.

We introduce two more probabilities (dependent on t): $f_K^N(t)$ as the probability of selecting a black ball after selecting exactly K white balls in the first N trials and $s_K^N(t)$ as the probability of selecting a white ball after selecting exactly K white balls in the first N trials.

We can now set up a recursion for the terms BKN(t). After the trial number N+1, we can only have K white balls in two mutually exclusive cases:

  • if we already have drawn K white balls after trial N and we draw a black ball in the last trial,

  • if we have K1 white balls after trial N and draw a white ball in the last trial.

In probability terms this means

\[ B_K^{N+1}(t)=f_K^N(t)B_K^N(t)+s_{K-1}^N(t)B_{K-1}^N(t). \]

In case i) the number of black balls after the first N trials is $b+K(w+b)$, while in case ii) the number of white balls after the first N trials is $w+(N-K+1)(w+b)$. Thus we obtain

\[ f_K^N(t)=\frac{b+K(w+b)}{(N+1)(w+b)}=\frac{1-t+K}{N+1},\qquad s_{K-1}^N(t)=\frac{w+(N-K+1)(w+b)}{(N+1)(w+b)}=\frac{t+N-K+1}{N+1} \]

and the specific recursion for $K=0,\ldots,N+1$

\[ B_K^{N+1}(t)=\frac{1-t+K}{N+1}B_K^N(t)+\frac{t+N+1-K}{N+1}B_{K-1}^N(t). \]
8.1

The initial conditions are

\[ B_0^1(t)=1-t,\qquad B_1^1(t)=t, \]

and we see that we get the hat function as

\[ N(x)=M_2(x)=\begin{cases}B_1^1(x+1),&\text{if }-1\le x<0,\\B_0^1(x),&\text{if }0\le x<1,\\0,&\text{if }|x|\ge1.\end{cases} \]

By induction we can derive a closed formula for the functions BKN based on the recursion (8.1), namely

\[ B_K^N(t)=\frac{1}{N!}\sum_{j=0}^{N-K}(-1)^j\binom{N+1}{j}(t+N-K-j)^N. \]
8.2

We see that the formula is correct for N=1. By the recursion (8.1) and the induction hypothesis, we get

\[ \begin{aligned}B_K^{N+1}(t)&=\frac{1-t+K}{N+1}B_K^N(t)+\frac{t+N+1-K}{N+1}B_{K-1}^N(t)\\&=\frac{1}{(N+1)!}\sum_{j=1}^{N+1-K}(-1)^{j-1}\binom{N+1}{j-1}(t+N+1-K-j)^N(1-t+K)+\frac{1}{(N+1)!}\sum_{j=0}^{N+1-K}(-1)^j\binom{N+1}{j}(t+N+1-K-j)^N(t+N+1-K)\\&=\frac{1}{(N+1)!}\sum_{j=0}^{N+1-K}(-1)^j(t+N+1-K-j)^N\,R(t)\end{aligned} \]

with

\[ R(t)=\binom{N+1}{j}(t+N+1-K)-\binom{N+1}{j-1}(1-t+K)=\frac{(N+1)!}{j!(N+2-j)!}\Big((N+2-j)(t+N+1-K)-j(1-t+K)\Big)=\frac{(N+1)!}{j!(N+2-j)!}(N+2)(t+N+1-K-j)=\binom{N+2}{j}(t+N+1-K-j), \]

establishing the desired formula (8.2).

We now define a piecewise polynomial function $U_N$ by setting for $0\le K\le N$ and $x\in\big[K-\frac{N+1}{2},\,K+1-\frac{N+1}{2}\big]$

\[ U_N(x)=B_{N-K}^N\Big(x+\frac{N+1}{2}-K\Big)=\frac{1}{N!}\sum_{j=0}^{K}(-1)^j\binom{N+1}{j}\Big(x+\frac{N+1}{2}-j\Big)^N. \]

Invoking the truncated power function $x_+$, we obtain, similar to (5.7), for all $x\in\mathbb{R}$

\[ U_N(x)=\frac{1}{N!}\sum_{j=0}^{N+1}(-1)^j\binom{N+1}{j}\Big(x+\frac{N+1}{2}-j\Big)_+^N=\frac{1}{N!}\sum_{j=0}^{N+1}(-1)^{N+1-j}\binom{N+1}{j}\Big(j-\frac{N+1}{2}-x\Big)_+^N, \]

yielding $U_3=B$ and item 2.14 for $N=3$.

8.2 A probability density function (Item 2.15)

In subsections 4.2 and 5.3 we have observed the following two properties of the spline $B$:

\[ B(x)\ge 0\ \text{ for all } x\in\mathbb{R} \quad\text{and}\quad \int_{-\infty}^{+\infty}B(x)\,dx=1. \]

Thus the function B qualifies as a probability density function in the sense that we can investigate a real random variable T for which the probability P that an observed value of T lies between a and b is given by

\[ P(a\le T\le b)=\int_a^b B(t)\,dt. \]

Let us look a little bit more into this. Note first that M1 also satisfies

\[ M_1(x)\ge 0\ \text{ for all } x\in\mathbb{R} \quad\text{and}\quad \int_{-\infty}^{+\infty}M_1(t)\,dt=1, \]

and thus can be interpreted as well as a probability density function of a real random variable X1.

The convolution property (6.2) yields that B=M4 is the probability density function of the sum Y=X1+X2+X3+X4, where each independent real random variable Xj has the density function M1. So what kind of experiment does this density function M1 represent, producing

\[ P\left(-\infty<X_1\le -\tfrac12\right)=0,\quad P\left(\tfrac12\le X_1<\infty\right)=0,\quad P(a\le X_1\le b)=b-a,\ \text{ if } [a,b]\subseteq\left[-\tfrac12,\tfrac12\right]? \]

In fact, consider rounding: for any $x\in\mathbb{R}$ let $r(x)\in\mathbb{Z}$ denote the integer closest to $x$, i.e., using the largest integer function $[[\,\cdot\,]]$:

\[ r(x)=\left[\left[\,x+\tfrac12\,\right]\right]. \]

This implies of course that

\[ r(x)-x\in\left(-\tfrac12,\tfrac12\right] \]

is the error when replacing the value x by its closest integer r(x), resolving item 2.15. This interpretation was given by Schoenberg in 1946 [ 38 ] , but it turns out (see [ 7 ] for a historical account) that such probability distributions were studied by a number of people, resulting in approaches quite different from the ones we have taken so far and which we will examine in the following.
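This probabilistic reading is easy to test empirically: drawing many samples of the sum of four independent rounding errors, each uniformly distributed over an interval of length one, should produce a histogram close to $B$. The sketch below is a minimal Monte Carlo check of mine, using the truncated power form of $B$ derived earlier; the sample size, bin count and tolerance are arbitrary choices.

\begin{verbatim}
import random
from math import comb

def B(x):
    """Cubic B-spline for the knots -2,-1,0,1,2 in truncated power form."""
    return sum((-1) ** j * comb(4, j) * max(x + 2 - j, 0.0) ** 3
               for j in range(5)) / 6.0

random.seed(1)
n_samples, n_bins = 1_000_000, 40
counts = [0] * n_bins                       # histogram of the sums over [-2, 2]
for _ in range(n_samples):
    s = sum(random.uniform(-0.5, 0.5) for _ in range(4))   # four rounding errors
    counts[min(int((s + 2.0) / 4.0 * n_bins), n_bins - 1)] += 1

width = 4.0 / n_bins
for i, c in enumerate(counts):
    centre = -2.0 + (i + 0.5) * width
    density = c / (n_samples * width)       # empirical density in this bin
    assert abs(density - B(centre)) < 0.02  # agrees with B up to sampling noise
print("empirical density of X1+X2+X3+X4 matches B")
\end{verbatim}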

8.3 Sommerfeld’s approach in 1904 (Items 2.16 and 2.17)

Starting from the sum $\sum_{j=1}^{n}X_j$, where each real random variable $X_j$ represents the error when replacing a real number by its closest integer, Sommerfeld in 1904 [ 43 ] wanted to investigate the limit behaviour if the number $n$ of terms goes to infinity. We will have a look at how he set out to compute the probability density functions explicitly for the first few values of $n$. We will refer to the corresponding probability density functions as $M_n$ and follow Sommerfeld's approach, thus ignoring the closed formulae we are able to derive by convolution.

\includegraphics{figpage36.png}
Figure 3 $M_2$ - the piecewise linear case for $X_1+X_2$.

We have already covered $M_1$, so we can continue with $M_2$, i.e., the sum $X_1+X_2$. Figure 3 is taken directly from Sommerfeld's paper. Working in $\mathbb{R}^2$ with axes $x_1$ and $x_2$, all possible outcomes for $X_1$ and $X_2$ are represented by pairs of numbers lying in a unit square centered at the origin, i.e., $Q_2=[-\tfrac12,\tfrac12]\times[-\tfrac12,\tfrac12]$. All outcomes $(x_1,x_2)$ resulting in the same sum $x$ lie on the line $x_1+x_2=x$. Now, for small $dx>0$ (the negative case is handled analogously) we are looking for the area of the region cut out of the square $Q_2$ by the strip $S_2(x,x+dx)$ between the two lines $x_1+x_2=x$ and $x_1+x_2=x+dx$, in the sense that

\[ P(x\le X_1+X_2\le x+dx)=\int_x^{x+dx}M_2(t)\,dt=\operatorname{area}\bigl(Q_2\cap S_2(x,x+dx)\bigr). \]

The width of the strip $S_2(x,x+dx)$, as the distance between the two lines, is $dx/\sqrt{2}$, so that

\[ \operatorname{area}\bigl(Q_2\cap S_2(x,x+dx)\bigr)=\frac12\bigl(S_2(x)+S_2(x+dx)\bigr)\,\frac{dx}{\sqrt{2}}, \]

where $S_2(x)$ denotes the length of that part of the line $x_1+x_2=x$ which lies within the square $Q_2$, since $Q_2\cap S_2(x,x+dx)$ is a trapezoid.

We obtain therefore that

\[ M_2(x)=\lim_{dx\to 0}\frac{1}{dx}\int_x^{x+dx}M_2(t)\,dt=\lim_{dx\to 0}\frac12\bigl(S_2(x)+S_2(x+dx)\bigr)\frac{1}{\sqrt{2}}=\frac{S_2(x)}{\sqrt{2}}. \]

Thus our task is now reduced to finding a formula for $S_2(x)$. The line intersects the square iff $-1<x<1$, so $S_2(x)=0$ for $|x|\ge 1$. To simplify we introduce $u=1+x$, so that we still have to investigate the case $0\le u<2$. Note that $u$ is the distance from the corner $(-\tfrac12,-\tfrac12)$ of $Q_2$ to the point $(x+\tfrac12,-\tfrac12)$, where the line $x_1+x_2=x$ intersects the line containing that side of $Q_2$ which originates at $(-\tfrac12,-\tfrac12)$ and is parallel to the $x_1$-axis (the same for $(-\tfrac12,x+\tfrac12)$ and the side parallel to the $x_2$-axis).

For $0\le u<1$ the point of intersection lies within $Q_2$, and so the right triangle with the vertices $(-\tfrac12,-\tfrac12)$, $(x+\tfrac12,-\tfrac12)$, $(-\tfrac12,x+\tfrac12)$ has a hypotenuse of length $S_2(x)$ and the other two sides are both of length $u$. Thus by Pythagoras, since $S_2(x)$ is nonnegative:

\[ (S_2(x))^2=2u^2 \quad\text{or}\quad S_2(x)=\sqrt{2}\,u=\sqrt{2}\,(1+x) \quad\text{for } -1\le x<0. \]

For $1\le u<2$ the intersection point $(x+\tfrac12,-\tfrac12)$ lies outside of $Q_2$, so the length $\sqrt{2}\,u$ of the hypotenuse of the triangle with vertices $(-\tfrac12,-\tfrac12)$, $(x+\tfrac12,-\tfrac12)$ and $(-\tfrac12,x+\tfrac12)$ is in fact too long, and we must subtract the lengths of the hypotenuses of the two small right triangles, one with vertices $(-\tfrac12,\tfrac12)$, $(-\tfrac12,x+\tfrac12)$ and $(x-\tfrac12,\tfrac12)$, the other with vertices $(\tfrac12,-\tfrac12)$, $(x+\tfrac12,-\tfrac12)$ and $(\tfrac12,x-\tfrac12)$, whose other sides both have length $u-1$. This yields

\[ S_2(x)=\sqrt{2}\,u-2\sqrt{2}\,(u-1)=\sqrt{2}\,\bigl((1+x)-2x\bigr)=\sqrt{2}\,(1-x),\quad\text{for } 0\le x<1, \]

so that we obtain, just as we should, $M_2(x)=S_2(x)/\sqrt{2}=N(x)$.
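The clipping argument behind $S_2(x)$ translates directly into a few lines of code: on the line $x_1+x_2=x$ one simply intersects the admissible range of $x_1$ with the two box constraints. The following sketch is my own illustration of that idea (not Sommerfeld's computation) and confirms that $S_2(x)/\sqrt{2}$ is the hat function.

\begin{verbatim}
from math import sqrt

def S2(x):
    """Length of the part of the line x1 + x2 = x inside the square [-1/2, 1/2]^2,
    obtained by clipping the admissible range of x1."""
    lo = max(-0.5, x - 0.5)        # from x1 >= -1/2 and x2 = x - x1 <= 1/2
    hi = min(0.5, x + 0.5)         # from x1 <=  1/2 and x2 = x - x1 >= -1/2
    return sqrt(2.0) * max(hi - lo, 0.0)   # arc length is sqrt(2) per unit of x1

def hat(x):
    return max(0.0, 1.0 - abs(x))

for i in range(-15, 16):
    x = i / 10.0
    assert abs(S2(x) / sqrt(2.0) - hat(x)) < 1e-12
print("M2 = S2 / sqrt(2) is the hat function")
\end{verbatim}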

We proceed now to $M_3$ and the sum $X_1+X_2+X_3$. Again Figure 4 is the original one from Sommerfeld's paper. Now the outcomes lie in the unit cube $Q_3=[-\tfrac12,\tfrac12)\times[-\tfrac12,\tfrac12)\times[-\tfrac12,\tfrac12)$ and all outcomes with the same sum $x$ lie on the plane $x_1+x_2+x_3=x$. Arguing as before for two variables, this time with the strip between the two planes for $x$ and $x+dx$, we find that the strip has width $dx/\sqrt{3}$ and

\[ M_3(x)=\frac{S_3(x)}{\sqrt{3}}, \]

where $S_3(x)$ is the area of the piece of the plane $x_1+x_2+x_3=x$ that lies within the cube $Q_3$. This time we have intersection iff $-\tfrac32<x<\tfrac32$, so $S_3(x)=0$ for $|x|\ge\tfrac32$. The substitution $u=\tfrac32+x$ introduces again the length of the (now three) segments from the corner $(-\tfrac12,-\tfrac12,-\tfrac12)$ to the intersections of the lines containing an edge of the cube originating at this corner with the plane $x_1+x_2+x_3=x$.

\includegraphics[scale=1.2]{figpage38.jpg}
Figure 4 $M_3$ - the piecewise quadratic case for $X_1+X_2+X_3$.

For $0\le u<1$ these intersection points $(x+1,-\tfrac12,-\tfrac12)$, $(-\tfrac12,x+1,-\tfrac12)$ and $(-\tfrac12,-\tfrac12,x+1)$ lie within the cube and $S_3(x)$ is the area of the equilateral triangle with these three vertices, so that its side length is $\sqrt{2}\,u$ and thus

\[ S_3(x)=\frac{\sqrt{3}}{2}u^2,\quad\text{for } 0\le u<1. \]

For $1\le u<2$, the intersection is a hexagon, obtained from the triangle with vertices $(x+1,-\tfrac12,-\tfrac12)$, $(-\tfrac12,x+1,-\tfrac12)$, $(-\tfrac12,-\tfrac12,x+1)$ by removing small equilateral triangles of side length $\sqrt{2}\,(u-1)$ at the corners, for example the one with the vertices $(x+1,-\tfrac12,-\tfrac12)$, $(\tfrac12,x,-\tfrac12)$, $(\tfrac12,-\tfrac12,x)$, yielding

\[ S_3(x)=\frac{\sqrt{3}}{2}\left(u^2-3(u-1)^2\right),\quad\text{for } 1\le u<2. \]

For $2\le u<3$, the intersection is again a triangle. Instead of computing its area directly, we observe that by taking away the triangles as we did for $1\le u<2$, we are now removing too much and have to add in again 3 triangles, one at each corner, of side length $\sqrt{2}\,(u-2)$, to obtain

\[ S_3(x)=\frac{\sqrt{3}}{2}\left(u^2-3(u-1)^2+3(u-2)^2\right),\quad\text{for } 2\le u<3. \]

Finally, we consider $M_4$ and the sum $X_1+X_2+X_3+X_4$. We find

\[ M_4(x)=\frac{S_4(x)}{\sqrt{4}}, \]

where $S_4(x)$ is the (3-dimensional) volume of the piece of the hyperplane $x_1+x_2+x_3+x_4=x$ that lies within the (4-dimensional) cube $Q_4=[-\tfrac12,\tfrac12)^4$. With the substitution $u=2+x$, we obtain a regular tetrahedron as the intersection for $0\le u<1$, yielding

\[ S_4(x)=\frac{\sqrt{4}}{6}u^3,\quad\text{for } 0\le u<1. \]

For $1\le u<2$, we obtain an octahedron, resulting from cutting four small tetrahedra at the 4 corners of the original tetrahedron, in order to get

\[ S_4(x)=\frac{\sqrt{4}}{6}\left(u^3-4(u-1)^3\right),\quad\text{for } 1\le u<2. \]

For $2\le u<3$, we still obtain an octahedron, but the four tetrahedra removed in the previous step are now overlapping. As compensation we have to add tetrahedra again, one for each edge of the original tetrahedron, i.e. a total of six, to obtain

\[ S_4(x)=\frac{\sqrt{4}}{6}\left(u^3-4(u-1)^3+6(u-2)^3\right),\quad\text{for } 2\le u<3. \]

Finally, for $3\le u<4$ the intersection is a tetrahedron again, and we need to take away again a tetrahedron for each face of the original tetrahedron, so that

\[ S_4(x)=\frac{\sqrt{4}}{6}\left(u^3-4(u-1)^3+6(u-2)^3-4(u-3)^3\right),\quad\text{for } 3\le u<4. \]

A quick check shows that the function derived as M4 is indeed the originally introduced M4 and thus B, establishing representation 2.16.
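That quick check can also be delegated to the computer. The sketch below is an illustration of mine, assuming the truncated power form of $B$ used earlier: it assembles $S_4$ from the four polynomial pieces above and verifies $M_4=S_4/\sqrt{4}=B$ on a grid of points.

\begin{verbatim}
from math import comb, sqrt

def S4(x):
    """Volume of the slice {x1+x2+x3+x4 = x} of the cube [-1/2,1/2]^4,
    assembled from the four polynomial pieces derived above (u = 2 + x)."""
    u = 2.0 + x
    if not 0.0 <= u < 4.0:
        return 0.0
    terms = [u ** 3, -4 * (u - 1) ** 3, 6 * (u - 2) ** 3, -4 * (u - 3) ** 3]
    return sqrt(4.0) / 6.0 * sum(terms[: int(u) + 1])

def B(x):
    """Cubic B-spline for the knots -2,-1,0,1,2 in truncated power form."""
    return sum((-1) ** j * comb(4, j) * max(x + 2 - j, 0.0) ** 3
               for j in range(5)) / 6.0

for i in range(-25, 26):
    x = i / 10.0
    assert abs(S4(x) / sqrt(4.0) - B(x)) < 1e-12
print("M4 = S4 / sqrt(4) reproduces the cubic B-spline B")
\end{verbatim}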

Sommerfeld was the first one (as far as I know) to present a graph of B alias M4, together with those for M1,M2 and M3 in his original paper from 1904, which is given in Figure 5, resolving item 2.17.

Note also that the formulae given for the pieces of the function S4 are divided differences of the truncated power functions

\[ S_4(x)=\frac{\sqrt{4}}{6}\,4!\,[0,1,2,3,4]_t\,(u-t)_+^3, \]

yielding also

\[ S_4(x)=\frac{\sqrt{4}}{6}\left(u^3-4(u-1)^3+6(u-2)^3-4(u-3)^3+(u-4)^3\right)=\frac{\sqrt{4}}{6}\,4!\,[0,1,2,3,4]_t\,(u-t)_+^3=\frac{\sqrt{4}}{6}\,4!\,[0,1,2,3,4]_t\,(u-t)^3=0,\quad\text{for } u\ge 4. \]

We will not sketch here how Sommerfeld pursued his original goal of investigating the limit behaviour; let it suffice to remark that properly normalized B-splines indeed tend to Gaussian (i.e., exponential) functions if we let the polynomial degree go to infinity. Since Sommerfeld did not give a closed formula expression for his probability density functions $M_n$, there were further attempts to shed some light on this problem, which did not become so widely known.

\includegraphics[scale=0.37]{fig_4_1.jpg} \includegraphics[scale=0.37]{fig_4_2.jpg}

\includegraphics[scale=0.37]{fig_4_3.jpg} \includegraphics[scale=0.37]{fig_4_4.jpg}

\includegraphics[scale=0.38]{fig_4_5.jpg}
Figure 5 Sommerfeld's graphs for $M_1, M_2, M_3, M_4=B$ and the limit for $n\to\infty$.

9 And something from Norway, too!

9.1 Brun 1932 (Item 2.18)

This section is devoted to the efforts of two Norwegian mathematicians, whose contributions and biographies are described in [ 7 ] . The first one is V. Brun, who is also the man who rediscovered Abel’s long lost original manuscript on transcendental functions, which Abel had submitted to the Academy in Paris in 1826. The text was finally printed in 1841, but the original was considered lost until Brun rediscovered it in Firenze in 1952. There were 8 pages missing, however, but that is another story, so back to our subject.

In his paper [ 6 ] in 1932, Brun had another look at the problem of the density functions Mn as considered by Sommerfeld. He had the idea of relating these functions by an integral recursion corresponding to the convolution (6.9). He then found the following representation for M1: introducing the jump function

\[ Q(x)=\lim_{\lambda\to 0^+}\arctan\left(\frac{x}{\lambda}\right)=\begin{cases} \frac{\pi}{2} & \text{if } x>0,\\ 0 & \text{if } x=0,\\ -\frac{\pi}{2} & \text{if } x<0, \end{cases} \]
9.1

we have

\[ \Delta Q(x)=Q\left(x+\tfrac12\right)-Q\left(x-\tfrac12\right)=\begin{cases} \pi & \text{if } -\tfrac12<x<\tfrac12,\\ 0 & \text{if } |x|>\tfrac12,\\ \frac{\pi}{2} & \text{if } x=\pm\tfrac12, \end{cases} \]

so that $\frac{1}{\pi}\Delta Q$ and $M_1$ only differ in the two points $\pm\tfrac12$, which is irrelevant for integration, yielding

\[ \frac{1}{\pi}\int_{x-\frac12}^{x+\frac12}\Delta Q(t)\,dt=\int_{x-\frac12}^{x+\frac12}M_1(t)\,dt=M_2(x), \]

and thus by iteration

\[ \frac{1}{\pi}\int_{x-\frac12}^{x+\frac12}\int_{t_3-\frac12}^{t_3+\frac12}\int_{t_2-\frac12}^{t_2+\frac12}\Delta Q(t_1)\,dt_1\,dt_2\,dt_3=M_4(x), \]

i.e. representation 2.18. Brun then used $\frac{1}{\pi}\Delta Q$ and the integral recursion to establish the functions $M_n$ in the integral form (6.10), and gave some quadrature formulae for the integration.
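Since $\frac{1}{\pi}\Delta Q$ coincides with the indicator function of $(-\tfrac12,\tfrac12)$ except at the two jump points, Brun's iterated integral can be imitated numerically by applying a sliding integral of window length one three times to that indicator. The following sketch is my own (grid spacing and tolerance are ad-hoc choices) and recovers $B$ to plotting accuracy.

\begin{verbatim}
import numpy as np
from math import comb

def B(t):
    """Cubic B-spline for the knots -2,-1,0,1,2 in truncated power form."""
    return sum((-1) ** j * comb(4, j) * max(t + 2 - j, 0.0) ** 3 for j in range(5)) / 6.0

h = 0.001                                  # grid spacing; window half-width = m cells
m = int(round(0.5 / h))
x = np.arange(-3.0, 3.0 + h / 2, h)
f = np.where(np.abs(x) < 0.5, 1.0, 0.0)    # (1/pi) * Delta Q away from the two jumps

for _ in range(3):                         # three sliding integrals over [x-1/2, x+1/2]
    F = np.concatenate(([0.0], np.cumsum((f[:-1] + f[1:]) / 2.0 * h)))  # antiderivative
    g = np.zeros_like(f)
    g[m:-m] = F[2 * m:] - F[:-2 * m]
    f = g

for xi in (-1.7, -1.0, -0.3, 0.0, 0.6, 1.4):
    i = int(round((xi + 3.0) / h))
    assert abs(f[i] - B(xi)) < 5e-3
print("three integrations of (1/pi)*Delta Q reproduce B")
\end{verbatim}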

9.2 Fjelstaad 1937 (Items 2.19 and 2.20)

In his paper [ 6 ] , Brun stated: Vi ser herav det h\aa pl\o se i \aa\ fremstille $\varphi_n(x)$ efter potenser av $x$ (We see from here how hopeless it is to represent $\varphi_n(x)$ [his notation for $M_n(2x)$] in powers of $x$). According to the introduction in [ 23 ] , this spurred on another Norwegian, J. E. Fjelstaad, to derive in fact such a representation from the integral form (6.10), bringing us to the end of our cubic B-spline tour.

Replacing x by x/2 we obtain from (6.10)

\[ M_n\left(\frac{x}{2}\right)=\frac{1}{\pi}\int_0^{+\infty}\left(\frac{\sin\left(\frac{\omega}{2}\right)}{\frac{\omega}{2}}\right)^n\cos\left(\frac{x\omega}{2}\right)d\omega=\frac{2}{\pi}\int_0^{+\infty}\left(\frac{\sin\omega}{\omega}\right)^n\cos(x\omega)\,d\omega. \]

Integration by parts then yields the formula

\[ \frac{1}{3!}\int_0^{+\infty}e^{-u\omega}u^3\,du=\frac{1}{\omega^4}, \]

so we obtain the double integral formula

\[ M_4\left(\frac{x}{2}\right)=\frac{2}{3!\,\pi}\int_0^{+\infty}\left(\int_0^{+\infty}e^{-u\omega}u^3\,du\right)\sin^4\omega\,\cos(x\omega)\,d\omega=\frac{2}{3!\,\pi}\int_0^{+\infty}u^3\left(\int_0^{+\infty}e^{-u\omega}\sin^4\omega\,\cos(x\omega)\,d\omega\right)du. \]

We compute that for the integral

\[ I_4=\int_0^{+\infty}e^{-(u-ix)\omega}\sin^4\omega\,d\omega \]

we have

\[ \operatorname{Re}I_4=\int_0^{+\infty}e^{-u\omega}\sin^4\omega\,\cos(x\omega)\,d\omega. \]

One integration by parts yields

\[ I_4=\frac{4\cdot 3}{4^2+(u-ix)^2}\int_0^{+\infty}e^{-(u-ix)\omega}\sin^2\omega\,d\omega \]

and yet another one

\[ I_4=\frac{4\cdot 3}{4^2+(u-ix)^2}\,\frac{2\cdot 1}{2^2+(u-ix)^2}\int_0^{+\infty}e^{-(u-ix)\omega}\,d\omega=\frac{4!}{\bigl(4^2+(u-ix)^2\bigr)\bigl(2^2+(u-ix)^2\bigr)(u-ix)}. \]

Thus we obtain

\[ M_4\left(\frac{x}{2}\right)=\frac{8}{\pi}\operatorname{Re}\int_0^{+\infty}\frac{u^3}{\bigl(4^2+(u-ix)^2\bigr)\bigl(2^2+(u-ix)^2\bigr)(u-ix)}\,du, \]

establishing item 2.19.

From here we want to proceed by using partial fractions. The zeros of the denominator are $i(x+4-2k)$, $k=0,1,2,3,4$, and we have

\[ \frac{u^3}{\bigl(4^2+(u-ix)^2\bigr)\bigl(2^2+(u-ix)^2\bigr)(u-ix)}=\sum_{k=0}^{4}\frac{A_k}{u-i(x+4-2k)}, \]

where the factors Ak can be computed to be

\[ A_k=\frac{i^3(x+4-2k)^3}{\prod_{p=0,\,p\neq k}^{4}\bigl(i(4-2k)-i(4-2p)\bigr)}=\frac{-i\,(x+4-2k)^3}{\prod_{p=0,\,p\neq k}^{4}2(p-k)}=i\,(x+4-2k)^3\,\frac{(-1)^{k+1}}{2^4\,k!\,(4-k)!}. \]

Since

\[ \sum_{k=0}^{4}\frac{A_k}{u-i(x+4-2k)}=\sum_{k=0}^{4}\frac{A_k\bigl(u+i(x+4-2k)\bigr)}{u^2+(x+4-2k)^2}, \]

we have

\[ \operatorname{Re}\sum_{k=0}^{4}\frac{A_k}{u-i(x+4-2k)}=\sum_{k=0}^{4}(x+4-2k)^3\,\frac{(-1)^k}{2^4\,k!\,(4-k)!}\,\frac{x+4-2k}{u^2+(x+4-2k)^2}. \]

Integration from 0 to z yields

\[ \int_0^z\operatorname{Re}\sum_{k=0}^{4}\frac{A_k}{u-i(x+4-2k)}\,du=\sum_{k=0}^{4}(x+4-2k)^3\,\frac{(-1)^k}{2^4\,k!\,(4-k)!}\,\arctan\left(\frac{z}{x+4-2k}\right). \]

Earlier we have already used limit properties of the arctan function in (9.4); here we see that

\[ \lim_{z\to\infty}\arctan\left(\frac{z}{x+4-2k}\right)=\begin{cases} \dfrac{\pi}{2}, & \text{if } x+4-2k>0,\\[1ex] -\dfrac{\pi}{2}, & \text{if } x+4-2k<0, \end{cases} \]

so we finally establish

\[ \operatorname{Re}\int_0^{+\infty}\sum_{k=0}^{4}\frac{A_k}{u-i(x+4-2k)}\,du=\frac{\pi}{2^5}\sum_{k=0}^{4}\frac{|x+4-2k|^3\,(-1)^k}{k!\,(4-k)!}=\frac{\pi}{2^5\,4!}\sum_{k=0}^{4}(-1)^k\binom{4}{k}|x+4-2k|^3 \]

and

\[ M_4\left(\frac{x}{2}\right)=\frac{1}{3!}\,\frac{1}{2^4}\sum_{k=0}^{4}(-1)^k\binom{4}{k}|x+4-2k|^3 \]

or

\[ M_4(x)=\frac{1}{3!}\,\frac{1}{2^4}\sum_{k=0}^{4}(-1)^k\binom{4}{k}|2x+4-2k|^3=\frac{1}{3!}\,\frac{1}{2}\sum_{k=0}^{4}(-1)^k\binom{4}{k}|x+2-k|^3, \]

establishing item 2.20. In the general case, which is of course the one considered in [ 23 ] , the signum function appears for odd polynomial orders (even degrees) in the sense that

\[ M_n(x)=\frac{1}{(n-1)!}\,\frac{1}{2}\sum_{k=0}^{n}(-1)^k\binom{n}{k}\left|x+\frac{n}{2}-k\right|^{n-1}\operatorname{sign}\left(x+\frac{n}{2}-k\right)^n. \]
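For the cubic case one can again let the computer confirm that Fjelstaad's representation agrees with the truncated power form of $B$ used throughout this note. The following minimal check is my own illustration, with hypothetical function names:

\begin{verbatim}
from math import comb

def B_truncated(x):
    """Cubic B-spline for the knots -2,-1,0,1,2 in truncated power form."""
    return sum((-1) ** k * comb(4, k) * max(x + 2 - k, 0.0) ** 3 for k in range(5)) / 6.0

def B_fjelstaad(x):
    """Fjelstaad's representation with absolute values (item 2.20)."""
    return sum((-1) ** k * comb(4, k) * abs(x + 2 - k) ** 3 for k in range(5)) / 12.0

for i in range(-30, 31):
    x = i / 10.0
    assert abs(B_truncated(x) - B_fjelstaad(x)) < 1e-12
print("the absolute-value representation agrees with the truncated power form")
\end{verbatim}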

10 An Outlook

A good way to wrap up our treatment now is to consider the general definition of a B-spline of arbitrary degree and make some comments in light of the 20 representations we have covered. For a more detailed introduction with examples and references I would like to refer to a recent tutorial [ 33 ] , standard spline monographs such as [ 3 ] , [ 42 ] and for the cardinal setting [ 40 ] , and specifically for CAGD [ 22 ] , [ 28 ] .

Schoenberg wrote his book [ 40 ] mainly about interpolation of infinite data by spline series of the form (4.1) for arbitrary polynomial degree. Still, in practical applications we regularly need to address situations with finitely many data points given over some finite interval, introducing the nontrivial problem of handling boundaries.

First, let us note that divided differences can be defined recursively not just for simple distinct knots, but also for coalescing knots, in the following way.

Definition 10.1

(Divided differences) Provided that a function $f$ is smooth enough so that all occurring derivatives are defined, the divided difference of order $\ell$ for the function $f$ in the points $t_k\le\dots\le t_{k+\ell}$ is defined recursively as

\[ [t_k]f=f(t_k),\quad\text{for } \ell=0 \]

and

\[ [t_k,\dots,t_{k+\ell}]f=\begin{cases} \dfrac{[t_{k+1},\dots,t_{k+\ell}]f-[t_k,\dots,t_{k+\ell-1}]f}{t_{k+\ell}-t_k}, & \text{if } t_k<t_{k+\ell},\\[2ex] \dfrac{f^{(\ell)}(t_k)}{\ell!}, & \text{if } t_k=\dots=t_{k+\ell}, \end{cases}\quad\text{for } \ell\ge 1. \]
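A direct transcription of Definition 10.1 into code is straightforward as long as the caller supplies the derivatives needed for coalescing knots. The sketch below is one possible such transcription (interface and names are mine, chosen for illustration only); it recomputes subexpressions and is meant purely to exhibit the recursion.

\begin{verbatim}
from math import exp, factorial

def divided_difference(knots, f, df):
    """Divided difference [t_k,...,t_{k+l}]f following Definition 10.1.
    `knots` is non-decreasing; `df(x, m)` must return the m-th derivative of f at x
    (it is needed only when knots coalesce)."""
    if len(knots) == 1:
        return f(knots[0])
    if knots[0] == knots[-1]:                       # all knots coincide
        order = len(knots) - 1
        return df(knots[0], order) / factorial(order)
    left = divided_difference(knots[:-1], f, df)
    right = divided_difference(knots[1:], f, df)
    return (right - left) / (knots[-1] - knots[0])

# Example: f = exp, whose m-th derivative is again exp.
f = exp
df = lambda t, m: exp(t)
print(divided_difference([0.0, 1.0, 2.0], f, df))   # distinct knots
print(divided_difference([0.0, 0.0, 0.0], f, df))   # triple knot: f''(0)/2! = 0.5
print(divided_difference([0.0, 0.0, 1.0], f, df))   # mixed case
\end{verbatim}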

We now start from a knot sequence t of non-decreasing knots tk, which will serve as the breakpoints for the polynomial pieces

\[ t:\quad \dots\le t_k\le t_{k+1}\le\dots. \]

To define B(asis)-splines of a given degree $d\ge 0$, we need at least $d+2$ knots, and $t_k<t_{k+d+1}$ for all possible $k$ to avoid functions that are identically zero.

Definition 10.2

(B-spline) For a given knot sequence t the k-th
B-spline of degree $d$ is defined as the divided difference with respect to $y$ of the truncated power function $(y-x)_+^d$ in the knots $t_k,\dots,t_{k+d+1}$, multiplied by the distance between the last and first knot in this subsequence, i.e.,

\[ B_{k,d,t}(x)=B_{k,d}(x)=(t_{k+d+1}-t_k)\,[t_k,\dots,t_{k+d+1}]_y\,(y-x)_+^d. \]

At this point we have to say a bit more about knot sequences. There are two main settings. One is a biinfinite sequence of simple knots $t_k$, $k\in\mathbb{Z}$, with the most common example being the integers. For this cardinal setting $t_C: t_k=k$, we have of course

\[ B_{-2,t_C,3}(x)=M(x\,|\,-2,-1,0,1,2)=M_4(x)=4\,[-2,-1,0,1,2]_t\,(t-x)_+^3=B(x). \]
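Definition 10.2 can be evaluated quite literally, at least for distinct knots, by combining a divided difference routine with the truncated power function; the cardinal identity above then serves as a test case. The following sketch (my own naming, restricted to simple knots so that no derivatives are required) reproduces $B$ in this way.

\begin{verbatim}
from math import comb

def divided_difference(knots, f):
    """Divided difference over distinct knots (no derivatives needed here)."""
    if len(knots) == 1:
        return f(knots[0])
    return (divided_difference(knots[1:], f) - divided_difference(knots[:-1], f)) \
        / (knots[-1] - knots[0])

def bspline(k, d, t, x):
    """Definition 10.2: (t_{k+d+1} - t_k) times the divided difference, taken with
    respect to y, of the truncated power (y - x)_+^d over t_k, ..., t_{k+d+1}."""
    knots = t[k:k + d + 2]
    return (knots[-1] - knots[0]) * \
        divided_difference(knots, lambda y: max(y - x, 0.0) ** d)

def B(x):
    """Cubic B-spline for the knots -2,-1,0,1,2 in truncated power form."""
    return sum((-1) ** j * comb(4, j) * max(x + 2 - j, 0.0) ** 3 for j in range(5)) / 6.0

cardinal = [-2, -1, 0, 1, 2]     # local index 0 corresponds to k = -2 in cardinal numbering
for i in range(-25, 26):
    x = i / 10.0
    assert abs(bspline(0, 3, cardinal, x) - B(x)) < 1e-12
print("the B-spline from Definition 10.2 on the cardinal knots coincides with B")
\end{verbatim}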

The other main setting is a finite knot sequence for a closed and bounded interval $[a,b]$, typically a standard so-called $(d+1)$-regular knot sequence where, with $n\ge d+1$,

\[ t:\quad t_1\le t_2\le\dots\le t_{n+d}\le t_{n+d+1}, \]
where $t_k<t_{k+d+1}$ for $k=1,\dots,n$, i.e., the maximum multiplicity of any given knot is $d+1$, and $t_1=t_2=\dots=t_{d+1}=a$, $t_{n+1}=t_{n+2}=\dots=t_{n+d+1}=b$, i.e., the multiplicity of the endpoints is exactly $d+1$.

It must be strongly emphasized – although here we have mostly looked at just one spline function B – that it is the set of all B-splines over a knot sequence that is needed in applications, such as manipulating a curve or solving a differential equation by collocation.

Based on Leibniz’ formula for divided differences

\[ [t_j,t_{j+1},\dots,t_{j+k}](g\,h)=\sum_{\ell=j}^{j+k}\bigl([t_j,\dots,t_\ell]g\bigr)\bigl([t_\ell,\dots,t_{j+k}]h\bigr) \]

one can establish a general recursion for B-splines (see item 2.4). With the B-splines of the initial level d=0, i.e., piecewise constants, defined as

\[ B_{k,0}(x)=\begin{cases} 1, & \text{if } x\in[t_k,t_{k+1}),\\ 0, & \text{if } x\notin[t_k,t_{k+1}), \end{cases} \]

we have the following recursion for $d\ge 1$

\[ B_{k,t,d}(x)=\frac{x-t_k}{t_{k+d}-t_k}\,B_{k,t,d-1}(x)+\frac{t_{k+d+1}-x}{t_{k+d+1}-t_{k+1}}\,B_{k+1,t,d-1}(x) \]
10.1

and thus triangular schemes for the evaluation of splines of the form

\[ \sum_k c_k\,B_{k,t,d}(x) \]
10.2

analogous to the one introduced in subsection 3.6.
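For completeness, here is a hedged sketch of mine of such an evaluation based on the recursion (10.1), using the usual convention that terms with a vanishing denominator are dropped; as a small test it verifies the partition of unity property, mentioned further below, on a $(d+1)$-regular knot sequence of my own choosing.

\begin{verbatim}
def bspline_rec(k, d, t, x):
    """B_{k,t,d}(x) via the recursion (10.1), with the convention 0/0 := 0
    for vanishing denominators at multiple knots."""
    if d == 0:
        return 1.0 if t[k] <= x < t[k + 1] else 0.0
    left = right = 0.0
    if t[k + d] > t[k]:
        left = (x - t[k]) / (t[k + d] - t[k]) * bspline_rec(k, d - 1, t, x)
    if t[k + d + 1] > t[k + 1]:
        right = (t[k + d + 1] - x) / (t[k + d + 1] - t[k + 1]) \
            * bspline_rec(k + 1, d - 1, t, x)
    return left + right

def spline(coeffs, d, t, x):
    """Evaluate sum_k c_k B_{k,t,d}(x) as in (10.2)."""
    return sum(c * bspline_rec(k, d, t, x) for k, c in enumerate(coeffs))

# 4-regular knot sequence on [0, 3] for cubic splines (d = 3, n = 6):
t = [0, 0, 0, 0, 1, 2, 3, 3, 3, 3]
d = 3
n = len(t) - d - 1
for x in (0.0, 0.4, 1.3, 2.7, 2.999):
    assert abs(spline([1.0] * n, d, t, x) - 1.0) < 1e-12   # partition of unity
print("the cubic B-splines on", t, "sum to one")
\end{verbatim}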

The continuity of a function of the form (10.2) is more tricky to describe. If all knots are simple, the polynomial pieces are glued together with continuity $d-1$. For a multiple knot, i.e., a number which occurs repeatedly in the knot sequence, the continuity at this breakpoint is reduced by the multiplicity of the knot. For a more detailed discussion see [ 33 ] . The flexibility of different orders of smoothness in different breakpoints is especially important in curve design for CAD/CAM.

The divided difference formulation (see item 2.5) shows that each B-spline is a piecewise polynomial as a linear combination of truncated powers. One can establish by induction, using the recursion (10.1), that

\[ \operatorname{supp}B_{k,t,d}=[t_k,t_{k+d+1}] \quad\text{and}\quad B_{k,t,d}(x)>0 \ \text{ for } x\in(t_k,t_{k+d+1}). \]

Using Peano’s theorem as in item 2.7, we can derive that the integral of a B-spline is given as

\[ \int_{-\infty}^{+\infty}B_{k,t,d}(x)\,dx=\frac{t_{k+d+1}-t_k}{d+1}. \]

Note that the normalization factor in the B-spline definition is chosen to achieve a partition of unity, and we must change it to $d+1$ to get spline functions that qualify as probability density functions (item 2.15). Only in the cardinal case do we get both properties at the same time.

As for item 2.2, we can show the basis property of the set of B-splines, namely that all piecewise polynomials of degree d, with smoothness in the breakpoints prescribed by the knot multiplicities, can be written in the form (10.2). This allows us also to show that the B-splines have smallest support under the given smoothness requirements. Also as in item 2.2, we can derive that the monomials can be represented as sums of the form (10.2), with coefficients given in terms of symmetric functions. Specifically, one gets that the B-splines form a partition of unity. The necessary general tool to show polynomial representation is the so-called Marsden identity [ 31 ] .

General multivariate splines are often defined by considering polynomial pieces over polyhedral domains (in two dimensions typically triangles or quadrilaterals) and gluing these pieces together at common facets with a given order of smoothness [ 10 ] , [ 4 ] . The geometric interpretation in item 2.18 is considered as the starting point for the investigation of so-called box splines [ 4 ] as multivariate piecewise polynomials, see Chapter 4 of [ 32 ] for a historical account including some reprints of Schoenberg's letters from the early days.

As an alternative to the approach of gluing polynomial pieces, an extremal property generalizing (3.11) from item 2.1 has been used to define multivariate spline functions as those functions which interpolate given data in some points and minimize, among all such interpolants, an integral expression involving second order derivatives. These so-called thin-plate splines, introduced by Duchon [ 17 ] , are, however, no longer piecewise polynomials in higher dimensions. For introductions to this subject see for example [ 18 ] , [ 19 ] .

Another really interesting topic, related to item 2.1, is the placing of interpolation conditions in some $x$-values, called nodes, which do not necessarily coincide with any knots (breakpoints), such that we have a unique interpolant for a given spline space. The fundamental result stems from [ 41 ] for Lagrange interpolation, was extended to Hermite and other settings in [ 29 ] , and is also related to Green's functions as touched upon in item 2.6 [ 30 ] . The Schoenberg-Whitney theorem says that for a given B-spline basis, the evaluation matrix needed for interpolation is non-singular if and only if its entries on the main diagonal are positive, meaning that the value of the $k$-th B-spline in the $k$-th interpolation node is positive. I will not even think of going into details here, such as totally positive matrices, variation diminishing splines, etc., etc.

Your mission, if you decide to accept it, is to find out much more about splines and their applications. Good Luck!

Bibliography

1

C. de Boor, Best approximation properties of spline functions of odd degree, J. Math. Mech. 12 (1963), pp. 747–750.

2

C. de Boor, On calculating with B-splines, J. Approx. Theory 6 (1972), pp. 50–62.

3

C. de Boor, A practical guide to splines, Springer, New York, 1978.

4

C. de Boor, K. Höllig, and S. Riemenschneider, Box splines, Springer, New York, 1993.

5

C. de Boor and R.E. Lynch, On splines and their minimum properties, J. Math. Mech. 15 (1966), pp. 953–970.

6

V. Brun, Gauss’ fordelingslov, Norsk Matematisk Tidsskrift, 14 (1932), pp. 81–92.

7

P.L. Butzer, M. Schmidt and E.L. Stark, Observations on the history of central B-splines, Archive for History of Exact Sciences 39 (1988), pp. 137–156.

8

A.S. Cavaretta, W. Dahmen and C.A. Micchelli, Stationary subdivision, Memoirs of AMS 93, 1991.

9

G.M. Chaikin, An algorithm for high-speed curve generation, Comp. Graphics and Image Proc. 3 (1974), pp. 346–349.

10

C.K. Chui, Multivariate Splines, CBMS 54, SIAM, Philadelphia, 1988.

11

C.K. Chui, An Introduction to Wavelets, Academic Press, Boston, 1992.

12

M.G. Cox, The numerical evaluation of B-splines, J. Inst. Math. Applics. 10 (1972), pp. 134–139.

13

H.B. Curry, and I.J. Schoenberg, On Pólya frequency functions IV. The fundamental spline functions and their limits, J. d’Analyse Math. 17 (1966), pp. 71–107.

14

I. Daubechies, Ten Lectures on Wavelets, CBMS 61, SIAM, Philadelphia, 1992.

15

P. J. Davis, Interpolation and Approximation 2nd edition, Dover, New York, 1975 (originally published 1963).

16

T. DeRose, M. Kass, and T. Truong, Subdivision surfaces in character animation, Computer Graphics 32 (1998), pp. 85–94.

17

J. Duchon, Interpolation des fonctions de deux variables suivant le principe de la flexion des plaques minces, RAIRO Analyse numérique 10 (1976), pp. 5–12.

18

N. Dyn, Interpolation of scattered data by radial functions, in Topics in Multivariate Approximation, C.K. Chui, L.L. Schumaker, and F.I. Utreras (eds), Academic Press, New York, 1987, pp. 47–61.

19

N. Dyn, Interpolation and approximation by radial and related functions, in Approximation Theory VI, Vol. 1, C.K. Chui, L.L. Schumaker, and J.D. Ward (eds.), Academic Press, New York, 1989, pp. 211–234.

20

N. Dyn, Analysis of convergence and smoothness by the formalism of Laurent polynomials, in Tutorials on Multiresolution in Geometric Modelling, A. Iske, E. Quak, and M.S. Floater (eds.), Springer, Heidelberg, 2002, pp. 51–68.

21

F. Eggenberger, and G. Pólya, Über die Statistik verketteter Vorgänge, Z. Angew. Math. Mech. 1 (1923), pp. 279–289.

22

G. Farin, Curves and Surfaces for Computer-Aided Geometric Design, 4th edition, Academic Press, San Diego, 1998.

23

J.E. Fjelstaad, Bemerking til Viggo Brun: Gauss’ fordelingslov, Norsk Matematisk Tidsskrift 19 (1937), pp. 69–71.

24

B. Friedman, A simple urn model, Comm. Pure Appl. Math. 2 (1949), pp. 59–70. \includegraphics[scale=0.1]{ext-link.png}

25

R.N. Goldman, Urn models, approximations, and splines, J. Approx. Theory 54 (1988), pp. 1–66.

26

R.N. Goldman, Recursive Triangles, in Computation of Curves and Surfaces, W. Dahmen, M. Gasca and C. M. Micchelli (eds), Kluwer, Dordrecht, 1989, pp. 27–72.

27

J.C. Holladay, A smoothest curve approximation, Math. Tables Aid. Comput. 11 (1957), pp. 233–243.

28

J. Hoschek, and D. Lasser, Fundamentals of Computer Aided Geometric Design, AKPeters, Wellesley, 1993.

29

S. Karlin, and Z. Ziegler, Chebyshevian spline functions, SIAM J. Numer. Anal. 3 (1966), pp. 514–543.

30

M.G. Krein, and G. Finkelstein, Sur les fonctions de Green complètement non-négatives des opérateurs différentiels ordinaires, Doklady Akad. Nauk. SSSR 24 (1939), pp. 202–223.

31

M.J. Marsden, An identity for spline functions with applications to variation-diminishing spline approximation, J. Approx. Theory 3 (1970), pp. 7–49.

32

C.A. Micchelli, Mathematical Aspects of Geometric Modelling, CBMS 65, SIAM, Philadelphia, 1995.

33

E. Quak, Nonuniform B-splines and B-wavelets, in Tutorials on Multiresolution in Geometric Modelling, A. Iske, E. Quak, and M.S. Floater (eds.), Springer, Heidelberg, 2002, pp. 101–146.

34

G. de Rham, Un peu de mathématique à propos d’une courbe plane, Elemente der Mathematik 2 (1947), pp. 73–76, pp. 89–97.

35

C. Runge, Über empirische Funktionen und die Interpolation zwischen äquidistanten Ordinaten, Z. f. Math. u. Phys. 46 (1901), pp. 224–243.

36

M.A. Sabin, The use of piecewise forms for the numerical representation of shape, Dissertation, Hungarian Academy of Sciences, 1977.

37

M.A. Sabin, Subdivision of box-splines; Eigenanalysis and artifacts of subdivision curves and surfaces, in Tutorials on Multiresolution in Geometric Modelling, A. Iske, E. Quak, and M. S. Floater (eds.), Springer, Heidelberg, 2002, pp. 3–23, pp. 69–92.

38

I.J. Schoenberg, Contributions to the problem of approximation of equidistant data by analytic functions, Part A: On the problem of smoothing or graduation. A first class of analytic approximation formulae. Part B: On the second problem of osculatory interpolation. A second class of analytic approximation formulae. Quart. Appl. Math. 4 (1946), pp. 45–99 and pp. 112–141.

39

I.J. Schoenberg, On spline functions, in Inequalities I, O. Shisha (ed.), Academic Press, New York, 1967.

40

I.J. Schoenberg, Cardinal Spline Interpolation, CBMS 12, SIAM, Philadelphia, 1973.

41

I.J. Schoenberg, and A. Whitney, On Pólya frequency functions III. The positivity of translation determinants with an application to the interpolation problem by spline curves, Trans. Amer. Math. Soc. 74 (1953), pp. 246–259.

42

L.L. Schumaker, Spline Functions: Basic Theory, Wiley, New York, 1981.


43

A. Sommerfeld, Eine besonders anschauliche Ableitung des Gaussischen Fehlergesetzes, in Festschrift Ludwig Boltzmann gewidmet zum 60. Geburtstage, 20. Februar 1904, Barth, Leipzig, 1904, pp. 848–859.

44

J.L. Walsh, J.H. Ahlberg and E.N. Nilson, Best approximation properties of the spline fit, J. Math. Mech. 11 (1962), pp. 225–234.