Abstract
We obtain new univariate Shepard operators using polynomials that are constructed such that they fit the interpolation data in a weighted least squares approximation way. We study the degree of exactness, the linearity and the remainder for the corresponding interpolation formula.
Authors
Andra Malina
Faculty of Mathematics and Computer Science, Babeş-Bolyai University, Cluj-Napoca, Romania
Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy, Cluj-Napoca, Romania
Keywords
Univariate Shepard operator; Weighted least squares approximation; Interpolation formula; Degree of exactness; Remainder term
Paper coordinates
A. Malina, Univariate Shepard operators combined with least squares fitting polynomials, Filomat, 38 (2024) no. 15, pp. 5507–5516. https://doi.org/10.2298/FIL2415507M
About this paper
Journal
Filomat
Publisher Name
Faculty of Sciences and Mathematics, University of Nis, Serbia
Print ISSN
0354-5180
Online ISSN
2406-0933
Paper (preprint) in HTML form
Univariate Shepard operators combined with least squares fitting polynomials
1 Introduction
D. Shepard introduced in 1968 in [13] a very powerful method for approximating a given function $f$ on a set of scattered data, a method that nowadays bears his name. The procedure is easy to implement and is expressed as a combination of some basis functions with the values of the function $f$ at a given set of interpolation nodes. However, two of its major drawbacks are the high computational cost and the low degree of exactness. Several authors have studied these issues and proposed different remedies, such as modifying the basis functions or combining the Shepard operator with other interpolation operators to increase the degree of exactness (see, e.g., [1]–[10]).
In the univariate case, when $f$ is a real-valued function defined on a subset $X$ of $\mathbb{R}$, for a given set of $K$ interpolation nodes ${x}_{i}\in X$, $i=1,\dots,K$, the Shepard operator is defined as
$${S}_{\mu}f(x)=\sum _{i=1}^{K}{A}_{i,\mu}(x)\cdot f({x}_{i}),$$  (1) 
with the basis functions ${A}_{i,\mu}$ given by
$${A}_{i,\mu}(x)=\frac{\left|x-{x}_{i}\right|^{-\mu}}{\sum _{j=1}^{K}\left|x-{x}_{j}\right|^{-\mu}},\quad i=1,\dots,K,\qquad {x}_{i}\ne {x}_{j}\text{ for }i\ne j,\ j=1,\dots,K,$$  (2) 
$x\in X$ and $\mu >0$ an arbitrary parameter.
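For illustration only (this sketch is not part of the paper; the function names `shepard_basis` and `shepard` are ours), the classical operator (1) with the basis functions (2) can be implemented directly:

```python
import numpy as np

def shepard_basis(x, nodes, mu=2.0):
    """Basis functions A_{i,mu}(x) of (2); the nodes must be distinct."""
    diff = np.abs(x - nodes)
    hit = diff == 0.0
    if hit.any():
        # x coincides with a node: the basis degenerates to an indicator
        return hit.astype(float)
    w = diff ** (-mu)
    return w / w.sum()

def shepard(x, nodes, fvals, mu=2.0):
    """Classical Shepard operator S_mu f(x) of (1)."""
    return shepard_basis(x, nodes, mu) @ fvals
```

Since the basis functions are nonnegative and sum to one, the operator interpolates the given data and reproduces constants, which reflects its low degree of exactness.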
This paper introduces a new univariate Shepard operator, combined with polynomials constructed by the least squares approach. In Section 2, after constructing these polynomials, we study several properties of them and of the combined Shepard operators (interpolation property, degree of exactness, linearity). Finally, we study the remainder, based on Peano's Theorem. Section 3 is dedicated to numerical examples that show the benefits of these Shepard operators.
2 Shepard operators combined with polynomials constructed by the least squares method
R. J. Renka introduced in 1988 in [11] an algorithm for improving the bivariate Shepard operator, considering a quadratic polynomial that interpolates the function $f$ on a set of given nodes and also approximates the data in a weighted least squares way. Later, in 1999, he improved this method in [12] by replacing the quadratic polynomial with a cubic one. In 2010, in [14], W. I. Thacker et al. emphasized the main disadvantages of these two methods and suggested combining the Shepard operator with a linear polynomial that still fits the data in a weighted least squares sense.
Using some ideas for the bivariate case presented in the above-mentioned papers, we consider an improvement of the classical Shepard operator in the univariate case, combining it with polynomials of degree $n$, $n\in \mathbb{N}$, constructed by the weighted least squares approximation technique.
Consider $X\subset \mathbb{R}$, $f:X\to \mathbb{R}$ and $K$ given real nodes, denoted by ${x}_{j}$, $j=1,\mathrm{\dots},K$. The values of the function $f$ on the given nodes are known and denoted by ${f}_{j}=f({x}_{j}),j=1,\mathrm{\dots},K$.
Under these assumptions, for a point $x\in X$, let us define the $n$th degree polynomial function ${C}_{j}^{n}[f]$, $j=1,\mathrm{\dots},K$, $n\in \mathbb{N}$, as
$${C}_{j}^{n}[f](x)={f}_{j}+\sum _{k=1}^{n}{a}_{j,k}{(x-{x}_{j})}^{k},$$  (3) 
where the coefficients ${a}_{j,k}$ are found such that they minimize the sum of the weighted squared residuals
$${E}_{j}=\sum _{\substack{i=1\\ i\ne j}}^{K}{\lambda}_{i,j}{\left[{C}_{j}^{n}[f]({x}_{i})-{f}_{i}\right]}^{2},$$  (4) 
where
$${\lambda}_{i,j}=\frac{\left|{x}_{i}-{x}_{j}\right|^{-\mu}}{\sum _{\substack{k=1\\ k\ne i}}^{K}\left|{x}_{i}-{x}_{k}\right|^{-\mu}},$$  (5) 
for $i,j=1,\mathrm{\dots},K$ and $\mu >0$.
In order to find the coefficients ${a}_{j,k}$ (i.e., to obtain the minimum of expression (4)), we follow the weighted least squares reasoning: take the partial derivatives of ${E}_{j}$ with respect to each unknown, set them to zero and solve the resulting system:
$$\frac{\partial {E}_{j}}{\partial {a}_{j,k}}=0,\quad\text{for each }k=1,\dots,n\text{ and }j=1,\dots,K.$$ 
Further, for every $j=1,\mathrm{\dots},K$ one obtains
$$\frac{\partial {E}_{j}}{\partial {a}_{j,k}}=2\sum _{\substack{i=1\\ i\ne j}}^{K}{\lambda}_{i,j}\left[\sum _{p=1}^{n}{a}_{j,p}{({x}_{i}-{x}_{j})}^{p}+({f}_{j}-{f}_{i})\right]\cdot {({x}_{i}-{x}_{j})}^{k}=0,\quad\text{for each }k=1,\dots,n.$$ 
Let us introduce the notation
$${x}_{i,j}^{p}=\sum _{\substack{i=1\\ i\ne j}}^{K}{\lambda}_{i,j}\cdot {({x}_{i}-{x}_{j})}^{p}.$$ 
Then, the system of normal equations that has to be solved in order to find the coefficients ${a}_{j,k},k=1,\mathrm{\dots},n,$ has the form
$$\left\{\begin{array}{l}
{a}_{j,1}{x}_{i,j}^{2}+{a}_{j,2}{x}_{i,j}^{3}+\dots+{a}_{j,n}{x}_{i,j}^{n+1}=\sum _{\substack{i=1\\ i\ne j}}^{K}{\lambda}_{i,j}\cdot ({x}_{i}-{x}_{j})\cdot ({f}_{i}-{f}_{j})\\
\quad\vdots\\
{a}_{j,1}{x}_{i,j}^{k+1}+{a}_{j,2}{x}_{i,j}^{k+2}+\dots+{a}_{j,n}{x}_{i,j}^{k+n}=\sum _{\substack{i=1\\ i\ne j}}^{K}{\lambda}_{i,j}\cdot {({x}_{i}-{x}_{j})}^{k}\cdot ({f}_{i}-{f}_{j})\\
\quad\vdots\\
{a}_{j,1}{x}_{i,j}^{n+1}+{a}_{j,2}{x}_{i,j}^{n+2}+\dots+{a}_{j,n}{x}_{i,j}^{2n}=\sum _{\substack{i=1\\ i\ne j}}^{K}{\lambda}_{i,j}\cdot {({x}_{i}-{x}_{j})}^{n}\cdot ({f}_{i}-{f}_{j}),
\end{array}\right.$$  (6) 
for each $j=1,\dots,K$.
For every $j=1,\dots,K$, we can write the above normal equations in matrix form as
$${M}_{j}\cdot {a}_{j}={b}_{j},$$  (7) 
where ${M}_{j}$ is an $n\times n$ matrix whose $(r,s)$ entry is $\sum _{\substack{i=1\\ i\ne j}}^{K}{\lambda}_{i,j}\cdot {({x}_{i}-{x}_{j})}^{r+s}$, ${b}_{j}$ is a vector of $n$ elements whose $k$th entry is $\sum _{\substack{i=1\\ i\ne j}}^{K}{\lambda}_{i,j}\cdot {({x}_{i}-{x}_{j})}^{k}\cdot ({f}_{i}-{f}_{j})$, and ${a}_{j}={({a}_{j,1},{a}_{j,2},\dots,{a}_{j,n})}^{T}$ is the vector of unknowns.
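As an illustrative sketch (names `lam` and `ls_coeffs` are ours, not from the paper), the weights (5) and the normal equations (7) can be assembled and solved directly with a dense linear solve:

```python
import numpy as np

def lam(i, j, nodes, mu=2.0):
    """Weight lambda_{i,j} of (5)."""
    num = np.abs(nodes[i] - nodes[j]) ** (-mu)
    den = sum(np.abs(nodes[i] - nodes[k]) ** (-mu)
              for k in range(len(nodes)) if k != i)
    return num / den

def ls_coeffs(j, nodes, fvals, n, mu=2.0):
    """Solve the normal equations M_j a_j = b_j of (7)."""
    K = len(nodes)
    idx = [i for i in range(K) if i != j]
    w = np.array([lam(i, j, nodes, mu) for i in idx])
    d = nodes[idx] - nodes[j]          # x_i - x_j
    df = fvals[idx] - fvals[j]         # f_i - f_j
    # (r, s) entry of M_j: sum_i lambda_{i,j} (x_i - x_j)^{r+s}, r, s = 1..n
    M = np.array([[np.sum(w * d ** (r + s)) for s in range(1, n + 1)]
                  for r in range(1, n + 1)])
    # k-th entry of b_j: sum_i lambda_{i,j} (x_i - x_j)^k (f_i - f_j)
    b = np.array([np.sum(w * d ** k * df) for k in range(1, n + 1)])
    return np.linalg.solve(M, b)
```

For instance, for $f={e}_{1}$ and $n=1$ this returns ${a}_{j,1}=1$, in agreement with the degree of exactness established in Theorem 2.3.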
Theorem 2.1
The operator ${C}_{j}^{n}[f]$ defined in (3) satisfies the following interpolation property
$${C}_{j}^{n}[f]({x}_{j})={f}_{j},j=1,\mathrm{\dots},K.$$ 
Proof 2.2.
For any $j=1,\mathrm{\dots},K$, one has
$${C}_{j}^{n}[f]({x}_{j})={f}_{j}+\sum _{k=1}^{n}{a}_{j,k}{({x}_{j}-{x}_{j})}^{k}={f}_{j}.$$ 
Theorem 2.3.
The operator ${C}_{j}^{n}[f]$, $j=1,\dots,K$, has degree of exactness $n$, i.e.,
$$\mathrm{dex}({C}_{j}^{n}[f])=n,j=1,\mathrm{\dots},K,$$ 
where $\mathrm{dex}$ denotes the degree of exactness.
Proof 2.4.
For $x\in X$ we have the following cases for ${C}_{j}^{n}[f],j=1,\mathrm{\dots},K$:

Case 1.
$f(x)={e}_{0}(x)={x}^{0}$. We get ${a}_{j,k}=0$, $k=1,\dots,n$, and obviously
$${C}_{j}^{n}[{e}_{0}](x)={e}_{0}({x}_{j})+0\cdot \sum _{k=1}^{n}{(x-{x}_{j})}^{k}=1={e}_{0}(x).$$ 
Case 2.
$f(x)={e}_{n}(x)={x}^{n}$. We obtain the following solution for the coefficients ${a}_{j,k}$:
$${a}_{j,k}=\binom{n}{n-k}{x}_{j}^{n-k},\quad k=1,\dots,n$$  (8) 
and
$$\begin{aligned}{C}_{j}^{n}[{e}_{n}](x)&={e}_{n}({x}_{j})+\sum _{k=1}^{n}\binom{n}{n-k}{x}_{j}^{n-k}{(x-{x}_{j})}^{k}\\ &=\binom{n}{n}{x}_{j}^{n-0}{(x-{x}_{j})}^{0}+\sum _{k=1}^{n}\binom{n}{n-k}{x}_{j}^{n-k}{(x-{x}_{j})}^{k}\\ &=\sum _{k=0}^{n}\binom{n}{k}{x}_{j}^{n-k}{(x-{x}_{j})}^{k}={({x}_{j}+x-{x}_{j})}^{n}={x}^{n}={e}_{n}(x).\end{aligned}$$ 
Case 3.
$f(x)={e}_{p}(x),p=1,\mathrm{\dots},n1$. In this situation we have
$${a}_{j,r}=\binom{p}{p-r}{x}_{j}^{p-r}\quad\text{for }r=1,\dots,p\qquad\text{and}\qquad {a}_{j,s}=0\quad\text{for }s=p+1,\dots,n$$ 
and
$${C}_{j}^{n}[{e}_{p}](x)={e}_{p}({x}_{j})+\sum _{r=1}^{p}\binom{p}{p-r}{x}_{j}^{p-r}{(x-{x}_{j})}^{r}={x}^{p}={e}_{p}(x).$$ 
Case 4.
$f(x)={e}_{n+1}(x)$. It is obvious that ${C}_{j}^{n}[{e}_{n+1}](x)\ne {x}^{n+1}$ since
$${C}_{j}^{n}[{e}_{n+1}](x)={e}_{n+1}({x}_{j})+\sum _{k=1}^{n}{a}_{j,k}{(x-{x}_{j})}^{k}$$ 
and no term ${x}^{n+1}$ appears.
In conclusion, $\mathrm{dex}({C}_{j}^{n}[f])=n,j=1,\mathrm{\dots},K$.
Theorem 2.5.
The operator ${C}_{j}^{n}[f]$ is linear.
Proof 2.6.
We have to show that for ${g}_{1},{g}_{2}:X\to \mathbb{R}$, arbitrarily chosen, and $\alpha ,\beta \in \mathbb{R}$, one has for $x\in X$:
$${C}_{j}^{n}[\alpha {g}_{1}+\beta {g}_{2}](x)=\alpha {C}_{j}^{n}[{g}_{1}](x)+\beta {C}_{j}^{n}[{g}_{2}](x),j=1,\mathrm{\dots},K.$$  (9) 
Let us define the terms that appear in (9):
$$\begin{aligned}{C}_{j}^{n}[{g}_{1}](x)&={g}_{1}({x}_{j})+\sum _{k=1}^{n}{a}_{j,k}^{\prime}{(x-{x}_{j})}^{k},\quad j=1,\dots,K,\\ {C}_{j}^{n}[{g}_{2}](x)&={g}_{2}({x}_{j})+\sum _{k=1}^{n}{a}_{j,k}^{\prime \prime}{(x-{x}_{j})}^{k},\quad j=1,\dots,K,\\ {C}_{j}^{n}[\alpha {g}_{1}+\beta {g}_{2}](x)&=(\alpha {g}_{1}+\beta {g}_{2})({x}_{j})+\sum _{k=1}^{n}{a}_{j,k}{(x-{x}_{j})}^{k}\\ &=\alpha {g}_{1}({x}_{j})+\beta {g}_{2}({x}_{j})+\sum _{k=1}^{n}{a}_{j,k}{(x-{x}_{j})}^{k},\quad j=1,\dots,K.\end{aligned}$$ 
By solving similar systems as in (6), we obtain the following relation between the coefficients that appear above
$${a}_{j,k}=\alpha {a}_{j,k}^{\prime}+\beta {a}_{j,k}^{\prime \prime},\quad\text{for every }k=1,\dots,n\text{ and }j=1,\dots,K.$$ 
Now, one has
$$\begin{aligned}{C}_{j}^{n}[\alpha {g}_{1}+\beta {g}_{2}](x)&=\alpha {g}_{1}({x}_{j})+\beta {g}_{2}({x}_{j})+\sum _{k=1}^{n}(\alpha {a}_{j,k}^{\prime}+\beta {a}_{j,k}^{\prime \prime}){(x-{x}_{j})}^{k}\\ &=\alpha \left[{g}_{1}({x}_{j})+\sum _{k=1}^{n}{a}_{j,k}^{\prime}{(x-{x}_{j})}^{k}\right]+\beta \left[{g}_{2}({x}_{j})+\sum _{k=1}^{n}{a}_{j,k}^{\prime \prime}{(x-{x}_{j})}^{k}\right]\\ &=\alpha {C}_{j}^{n}[{g}_{1}](x)+\beta {C}_{j}^{n}[{g}_{2}](x),\quad j=1,\dots,K,\end{aligned}$$ 
and so (9) is proved.
Definition 2.7.
For $f:X\to \mathbb{R}$ and the set of $K$ interpolation nodes, using the $n$th degree polynomial given in (3), we can define the univariate Shepard operator combined with an $n$th degree polynomial as
$$S{P}_{n}[f](x)=\sum _{j=1}^{K}{A}_{j,\mu}(x)\cdot {C}_{j}^{n}[f](x),$$  (10) 
with ${A}_{j,\mu}$ defined in (2) using the given parameter $\mu >0$.
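Putting the pieces together, definition (10) admits the following self-contained, unoptimized sketch (the name `sp_n` is ours; it evaluates (2), (5), (7) and (3) directly):

```python
import numpy as np

def sp_n(x, nodes, fvals, n, mu=2.0):
    """Evaluate SP_n[f](x) of (10) for distinct nodes."""
    K = len(nodes)
    # Shepard basis A_{j,mu}(x) of (2)
    diff = np.abs(x - nodes)
    if (diff == 0.0).any():
        A = (diff == 0.0).astype(float)   # x is an interpolation node
    else:
        w = diff ** (-mu)
        A = w / w.sum()
    val = 0.0
    for j in range(K):
        idx = [i for i in range(K) if i != j]
        # weights lambda_{i,j} of (5)
        lam = np.array([np.abs(nodes[i] - nodes[j]) ** (-mu) /
                        sum(np.abs(nodes[i] - nodes[k]) ** (-mu)
                            for k in range(K) if k != i)
                        for i in idx])
        d = nodes[idx] - nodes[j]
        df = fvals[idx] - fvals[j]
        # normal equations (7): M_j a_j = b_j
        M = np.array([[np.sum(lam * d ** (r + s)) for s in range(1, n + 1)]
                      for r in range(1, n + 1)])
        b = np.array([np.sum(lam * d ** k * df) for k in range(1, n + 1)])
        a = np.linalg.solve(M, b)
        # local polynomial C_j^n[f](x) of (3)
        Cj = fvals[j] + sum(a[k - 1] * (x - nodes[j]) ** k
                            for k in range(1, n + 1))
        val += A[j] * Cj
    return val
```

Numerically, with $n=2$ the monomial ${e}_{2}$ is reproduced up to solver precision, consistent with the interpolation and exactness properties proved below.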
Theorem 2.8.
The following interpolation property holds
$$S{P}_{n}[f]({x}_{j})=f({x}_{j}),j=1,\mathrm{\dots},K.$$ 
Proof 2.9.
Since the Shepard basis functions satisfy the cardinality property ${A}_{i,\mu}({x}_{j})={\delta}_{i,j}$, one obtains $S{P}_{n}[f]({x}_{j})={C}_{j}^{n}[f]({x}_{j})={f}_{j}$, by Theorem 2.1.
Theorem 2.10.
The operator $S{P}_{n}$ is linear.
Proof 2.11.
For ${g}_{1},{g}_{2}:X\to \mathbb{R}$ arbitrarily chosen and $\alpha ,\beta \in \mathbb{R}$, using the linearity of ${C}_{j}^{n}$ shown in Theorem 2.5, one has
$$\begin{aligned}S{P}_{n}[\alpha {g}_{1}+\beta {g}_{2}](x)&=\sum _{j=1}^{K}{A}_{j,\mu}(x)\cdot {C}_{j}^{n}[\alpha {g}_{1}+\beta {g}_{2}](x)\\ &=\sum _{j=1}^{K}{A}_{j,\mu}(x)\cdot \left[\alpha {C}_{j}^{n}[{g}_{1}](x)+\beta {C}_{j}^{n}[{g}_{2}](x)\right]\\ &=\alpha \sum _{j=1}^{K}{A}_{j,\mu}(x)\cdot {C}_{j}^{n}[{g}_{1}](x)+\beta \sum _{j=1}^{K}{A}_{j,\mu}(x)\cdot {C}_{j}^{n}[{g}_{2}](x)\\ &=\alpha S{P}_{n}[{g}_{1}](x)+\beta S{P}_{n}[{g}_{2}](x),\end{aligned}$$ 
the linearity of $S{P}_{n}$ being proved.
Theorem 2.12.
The Shepard operator $S{P}_{n}$ has degree of exactness $n$.
Proof 2.13.
We know that for some arbitrary operators ${R}_{i},i=1,\mathrm{\dots},K$, with $\mathrm{dex}({R}_{i})={r}_{i},i=1,\mathrm{\dots},K,$ we have $\mathrm{dex}({S}_{R})=\mathrm{min}\{{r}_{1},\mathrm{\dots},{r}_{K}\}$, where
$${S}_{R}f(x)=\sum _{i=1}^{K}{A}_{i,\mu}(x)\cdot {R}_{i}f(x).$$ 
Taking into account this property and the fact that $\mathrm{dex}({C}_{j}^{n})=n,\forall j=1,\mathrm{\dots},K$, we obtain the desired conclusion.
We now introduce the interpolation formula for the univariate Shepard operator combined with a polynomial, given by
$$f=S{P}_{n}[f]+{R}_{n}[f],$$  (11) 
with ${R}_{n}[f]$ denoting the remainder.
Considering the space ${H}^{m}[a,b]$, $m\in \mathbb{N}\setminus \{0\}$, of functions $f\in {C}^{m-1}[a,b]$ (continuously differentiable up to order $m-1$, inclusive) with ${f}^{(m-1)}$ absolutely continuous on $[a,b]$, we obtain the following result for the remainder of the formula:
Theorem 2.14.
If $f\in {H}^{n+1}[a,b]$, then
$${R}_{n}[f](x)={\int}_{a}^{b}{\varphi}_{n}(x,t)\cdot {f}^{(n+1)}(t)\,dt,$$ 
where
$${\varphi}_{n}(x,t)=\frac{{(x-t)}_{+}^{n}}{n!}-\sum _{j=1}^{K}{A}_{j,\mu}(x)\cdot \left[\frac{{({x}_{j}-t)}_{+}^{n}}{n!}+\sum _{k=1}^{n}{a}_{j,k}{(x-{x}_{j})}^{k}\right],$$  (12) 
with ${a}_{j,k}$ given as solutions of $\frac{\partial {E}_{j}}{\partial {a}_{j,k}}=0$, for each $k=1,\mathrm{\dots},n$, for
$${E}_{j}=\sum _{\substack{i=1\\ i\ne j}}^{K}{\lambda}_{i,j}{\left[\frac{{({x}_{j}-t)}_{+}^{n}}{n!}+\sum _{k=1}^{n}{a}_{j,k}{({x}_{i}-{x}_{j})}^{k}-\frac{{({x}_{i}-t)}_{+}^{n}}{n!}\right]}^{2}$$ 
and ${\lambda}_{i,j}$ given in (5), $j=1,\mathrm{\dots},K.$
Proof 2.15.
The degree of exactness of the Shepard operator $S{P}_{n}$ is $n$. Using now Peano's theorem, one gets
$${R}_{n}[f](x)={\int}_{a}^{b}{\varphi}_{n}(x,t)\cdot {f}^{(n+1)}(t)\,dt,$$ 
with
$${\varphi}_{n}(\cdot ,t)={R}_{n}\left[\frac{{(\cdot -t)}_{+}^{n}}{n!}\right]=\frac{{(\cdot -t)}_{+}^{n}}{n!}-\sum _{j=1}^{K}{A}_{j,\mu}(\cdot )\cdot {C}_{j}^{n}\left[\frac{{(\cdot -t)}_{+}^{n}}{n!}\right].$$ 
Finally, for all $x\in [a,b]$, we obtain
$${\varphi}_{n}(x,t)=\frac{{(x-t)}_{+}^{n}}{n!}-\sum _{j=1}^{K}{A}_{j,\mu}(x)\cdot \left[\frac{{({x}_{j}-t)}_{+}^{n}}{n!}+\sum _{k=1}^{n}{a}_{j,k}{(x-{x}_{j})}^{k}\right],$$ 
where ${a}_{j,k}$ are solutions of $\frac{\partial {E}_{j}}{\partial {a}_{j,k}}=0$, for each $k=1,\mathrm{\dots},n$, for
$${E}_{j}=\sum _{\substack{i=1\\ i\ne j}}^{K}{\lambda}_{i,j}{\left[\frac{{({x}_{j}-t)}_{+}^{n}}{n!}+\sum _{k=1}^{n}{a}_{j,k}{({x}_{i}-{x}_{j})}^{k}-\frac{{({x}_{i}-t)}_{+}^{n}}{n!}\right]}^{2},$$ 
which concludes the proof.
3 Test results
For the numerical experiments we consider four well-known real-valued test functions (see, e.g., [9]):
$$\begin{array}{ll}\text{Cliff:}&{f}_{1}(x)=\frac{1}{2}\tanh(9x+1)+0.5,\\[4pt]\text{Gentle:}&{f}_{2}(x)=\frac{1}{3}\exp\left[-\frac{81}{16}{(x-0.5)}^{2}\right],\\[4pt]\text{Saddle:}&{f}_{3}(x)=\frac{1.25}{6+6{(3x-1)}^{2}},\\[4pt]\text{Steep:}&{f}_{4}(x)=\frac{1}{3}\exp\left[-\frac{81}{4}{(x-0.5)}^{2}\right].\end{array}$$  (13) 
For each function ${f}_{i}$, $i=1,\dots,4$, we compare the results obtained with the linear, quadratic and cubic interpolants $S{P}_{j}[{f}_{i}]$, $j=1,2,3$, with those obtained for other combined Shepard operators that are well known in the literature. We study the maximum approximation errors for the Shepard operators of Lagrange, Taylor and Bernoulli type, each of first, second and third order, denoted by $S{L}_{k}$, $S{T}_{k}$ and $S{B}_{k}$, $k=1,2,3$, respectively. We recall their definitions in the sequel.
3.1 Univariate Shepard-Lagrange operator (see, e.g., [5])
For $K$ distinct points ${x}_{i}$ that belong to $X\subset \mathbb{R}$ and the real-valued function $f$ defined on $X$ such that the data $f({x}_{i})$, $i\in \{1,\dots,K\}$, are known, the univariate Shepard-Lagrange operator is defined as
$$S{L}_{k}[f](x)=\sum _{j=1}^{K}{A}_{j,\mu}(x)\cdot {L}_{k}^{j}[f](x),$$  (14) 
with
$${L}_{k}^{j}[f](x)=\sum _{i=0}^{k}\frac{\prod _{\alpha =0,\alpha \ne i}^{k}(x-{x}_{j+\alpha})}{\prod _{\alpha =0,\alpha \ne i}^{k}({x}_{j+i}-{x}_{j+\alpha})}\cdot f({x}_{j+i}),$$  (15) 
${x}_{K+i}={x}_{K-i}$, $i=1,\dots,k$, and ${A}_{j,\mu}$ defined in (2), $\mu >0$.
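The local Lagrange factor (15) can be sketched as follows (the name `lagrange_local` is ours; indices are 0-based, with the reflection ${x}_{K+i}={x}_{K-i}$ translated accordingly):

```python
import numpy as np

def lagrange_local(x, nodes, fvals, j, k):
    """L_k^j[f](x) of (15) on the nodes x_j, ..., x_{j+k}; indices past
    the last node are reflected, mirroring x_{K+i} = x_{K-i} (0-based)."""
    K = len(nodes)
    idx = [j + a if j + a < K else 2 * (K - 1) - (j + a)
           for a in range(k + 1)]
    total = 0.0
    for i in idx:
        # fundamental Lagrange polynomial attached to node i
        li = np.prod([(x - nodes[m]) / (nodes[i] - nodes[m])
                      for m in idx if m != i])
        total += li * fvals[i]
    return total
```

Being an ordinary Lagrange polynomial on $k+1$ nodes, it reproduces polynomials of degree at most $k$ exactly.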
3.2 Univariate Shepard-Taylor operator (see, e.g., [5])
For $f:X\to \mathbb{R}$ and $K$ distinct interpolation nodes ${x}_{j},j\in \{1,\mathrm{\dots},K\}$, consider the sets
$$\mathrm{\Delta}=\{{\eta}_{j,i}\mid {\eta}_{j,i}(f)={f}^{(i)}({x}_{j})\text{ with }j=1,\dots,K;\ i=0,\dots,k;\ k\in {\mathbb{N}}^{\ast}\}$$ 
and
$${\mathrm{\Delta}}_{j}(f)=\{{\eta}_{j,p}\mid p=0,\dots,k\}$$ 
such that ${\mathrm{\Delta}}_{j}\subset \mathrm{\Delta}$ is the subset of $\mathrm{\Delta}$ associated to ${\eta}_{j}$, with ${\eta}_{j}\in {\mathrm{\Delta}}_{j}$, for all $j=1,\dots,K$.
Then, the univariate ShepardTaylor operator is defined as
$$S{T}_{k}[f](x)=\sum _{j=1}^{K}{A}_{j,\mu}(x)\cdot {T}_{k}^{j}[f](x),$$  (16) 
with
$${T}_{k}^{j}[f](x)=\sum _{i=0}^{k}\frac{{(x-{x}_{j})}^{i}}{i!}\cdot {f}^{(i)}({x}_{j})$$  (17) 
and ${A}_{j,\mu}$ defined in (2), $\mu >0$.
3.3 Univariate Shepard-Bernoulli operator (see, e.g., [6])
Suppose there are $K$ given points ${x}_{j}$ in $X\subset \mathbb{R}$ and ${x}_{K+1}={x}_{K-1}$. Then, we can define the univariate Shepard-Bernoulli operator as follows
$$S{B}_{k}[f](x)=\sum _{j=1}^{K}{A}_{j,\mu}(x)\cdot {B}_{k}[f;{x}_{j},{x}_{j+1}](x),$$  (18) 
with ${A}_{j,\mu}$ defined in (2) and the Bernoulli operators ${B}_{k}$ given by
$${B}_{k}[f;a,b](x)=f(a)+\sum _{j=1}^{k}\frac{{h}^{j-1}}{j!}\cdot \left({B}_{j}\left(\frac{x-a}{h}\right)-{B}_{j}\right)\cdot \left({f}^{(j-1)}(b)-{f}^{(j-1)}(a)\right)$$ 
for $f\in {C}^{k}[a,b]$, $k\ge 1$, $h=b-a$.
${B}_{n}$ are the Bernoulli numbers, i.e. the values of the Bernoulli polynomials ${B}_{n}(x)$ at $x=0$. The Bernoulli polynomials are defined recursively as
$$\left\{\begin{array}{l}{B}_{0}(x)=1,\\ {B}_{n}^{\prime}(x)=n{B}_{n-1}(x),\quad n\ge 1,\\ {\int}_{0}^{1}{B}_{n}(x)\,dx=0,\quad n\ge 1.\end{array}\right.$$ 
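The recursion determines each ${B}_{n}$ up to its constant term, which the vanishing integral then fixes. A small sketch (the name `bernoulli_polys` is ours) computes exact rational coefficients:

```python
from fractions import Fraction

def bernoulli_polys(nmax):
    """Coefficients of B_0, ..., B_nmax in ascending powers of x, from
    B_0 = 1, B_n' = n B_{n-1} and the condition int_0^1 B_n(x) dx = 0."""
    polys = [[Fraction(1)]]
    for n in range(1, nmax + 1):
        prev = polys[-1]
        # antiderivative of n * B_{n-1}; the constant term is set below
        poly = [Fraction(0)] + [Fraction(n) * c / (k + 1)
                                for k, c in enumerate(prev)]
        # choose the constant so the integral over [0, 1] vanishes
        poly[0] = -sum(c / (k + 1) for k, c in enumerate(poly))
        polys.append(poly)
    return polys
```

The constant terms are the Bernoulli numbers ${B}_{n}={B}_{n}(0)$, e.g. ${B}_{1}=-1/2$, ${B}_{2}=1/6$, ${B}_{4}=-1/30$.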
We consider a set of $K=50$ equally spaced interpolation nodes in the interval $X=[0,1]$ and set the parameter $\mu =2$. Table 1 presents the maximum interpolation errors for the classical Shepard operator ${S}_{\mu}f$ introduced in (1) and for the linear $S{P}_{1}$, quadratic $S{P}_{2}$ and cubic $S{P}_{3}$ Shepard operators, introduced in (10) for $n=1,2,3$, respectively. In addition, we present the maximum approximation errors for the Shepard operators of Lagrange, Taylor and Bernoulli type, of order 1, 2 and 3, respectively. We can observe that the three new operators produce better approximation results than the classical Shepard operator. Moreover, as expected, higher degree polynomials produce smaller approximation errors. In the linear and quadratic cases, the approximation results for the Shepard operators combined with least squares fitting polynomials are close to the best results for most of the functions. In the cubic case, the new Shepard operators produce the smallest interpolation errors.
Table 1: Maximum approximation errors, $K=50$ equally spaced nodes.

|  | ${f}_{1}$ | ${f}_{2}$ | ${f}_{3}$ | ${f}_{4}$ |
|---|---|---|---|---|
| ${S}_{\mu}f$ | 0.0247 | 0.0043 | 0.0024 | 0.0084 |
| $S{P}_{1}$ | 0.0157 | 0.0025 | 0.0024 | 0.0041 |
| $S{L}_{1}$ | 0.0081 | 0.0020 | 0.0013 | 0.0030 |
| $S{T}_{1}$ | 0.0067 | 0.0020 | 0.0012 | 0.0027 |
| $S{B}_{1}$ | 0.0081 | 0.0020 | 0.0013 | 0.0030 |
| $S{P}_{2}$ | 0.0066 | 0.0012 | 8.7983e-04 | 0.0041 |
| $S{L}_{2}$ | 0.0048 | 9.4582e-04 | 0.0014 | 0.0027 |
| $S{T}_{2}$ | 0.0050 | 0.0010 | 7.7720e-04 | 0.0026 |
| $S{B}_{2}$ | 0.0048 | 0.0010 | 7.7771e-04 | 0.0027 |
| $S{P}_{3}$ | 0.0053 | 5.4032e-04 | 6.7304e-04 | 0.0023 |
| $S{L}_{3}$ | 0.0671 | 0.0050 | 0.0049 | 0.0024 |
| $S{T}_{3}$ | 0.0094 | 8.7148e-04 | 8.7393e-04 | 0.0024 |
| $S{B}_{3}$ | 0.0104 | 8.8533e-04 | 8.7689e-04 | 0.0024 |
We also test these operators on a second set of $K=50$ Chebyshev nodes, defined as
$${x}_{j}=\frac{1}{2}+\frac{1}{2}\cos\left(\frac{2j-1}{2K}\pi \right),\quad j=1,\dots,K.$$ 
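These nodes can be generated directly from the formula above (a one-line sketch; the name `chebyshev_nodes` is ours):

```python
import numpy as np

def chebyshev_nodes(K):
    """x_j = 1/2 + (1/2) cos((2j - 1) pi / (2K)), j = 1, ..., K, in (0, 1)."""
    j = np.arange(1, K + 1)
    return 0.5 + 0.5 * np.cos((2 * j - 1) * np.pi / (2 * K))
```

The nodes cluster near the endpoints of $[0,1]$ and are symmetric about $1/2$.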
We present the maximum approximation errors in Table 2. In this case, the new operators produce the best results in the quadratic case for all functions except ${f}_{4}$. In the cubic case they produce the smallest interpolation errors for all test functions. Very good results are obtained in the linear case as well.
Table 2: Maximum approximation errors, $K=50$ Chebyshev nodes.

|  | ${f}_{1}$ | ${f}_{2}$ | ${f}_{3}$ | ${f}_{4}$ |
|---|---|---|---|---|
| ${S}_{\mu}f$ | 0.0246 | 0.0064 | 0.0046 | 0.0160 |
| $S{P}_{1}$ | 0.0119 | 0.0018 | 0.0019 | 0.0066 |
| $S{L}_{1}$ | 0.0078 | 0.0034 | 0.0016 | 0.0031 |
| $S{T}_{1}$ | 0.0094 | 0.0035 | 0.0017 | 0.0021 |
| $S{B}_{1}$ | 0.0083 | 0.0035 | 0.0016 | 0.0031 |
| $S{P}_{2}$ | 0.0054 | 9.3156e-04 | 0.0011 | 0.0065 |
| $S{L}_{2}$ | 0.0119 | 0.0027 | 0.0013 | 0.0040 |
| $S{T}_{2}$ | 0.0139 | 0.0029 | 0.0015 | 0.0037 |
| $S{B}_{2}$ | 0.0133 | 0.0029 | 0.0015 | 0.0038 |
| $S{P}_{3}$ | 0.0046 | 2.6850e-04 | 7.0098e-04 | 0.0027 |
| $S{L}_{3}$ | 0.0070 | 7.3503e-04 | 7.9055e-04 | 0.0051 |
| $S{T}_{3}$ | 0.0089 | 9.3807e-04 | 9.8040e-04 | 0.0054 |
| $S{B}_{3}$ | 0.0082 | 9.3521e-04 | 9.8137e-04 | 0.0054 |
Finally, we present the graphical results for the Gentle and the Saddle functions using the set of 50 equally spaced nodes. Figures 1 and 2 illustrate the functions ${f}_{2}$ and ${f}_{3}$ together with their polynomial Shepard interpolants $S{P}_{1}$, $S{P}_{2}$ and $S{P}_{3}$.
References
[1] R. Caira, F. Dell'Accio, Shepard-Bernoulli operators, Math. Comp. 76 (2007), 299–321.
[2] T. Cătinaş, The combined Shepard-Abel-Goncharov univariate operator, J. Numer. Anal. Approx. Theory 32 (2003), 11–20.
[3] Gh. Coman, Combined Shepard univariate operators, East J. Approx. 7 (2001), 471–483.
[4] Gh. Coman, R. Trîmbiţaş, Univariate Shepard-Birkhoff interpolation, J. Numer. Anal. Approx. Theory 30 (2001), 15–24.
[5] Gh. Coman, R. Trîmbiţaş, Combined Shepard univariate operators, East J. Approx. 7 (2001), 471–483.
[6] F. Dell'Accio, F. Di Tommaso, Scattered data interpolation by Shepard's like methods: classical results and recent advances, Dolomites Res. Notes Approx. 9 (2016), 32–44.
[7] R. Franke, Scattered data interpolation: tests of some methods, Math. Comp. 38 (1982), 181–200.
[8] R. Franke, G. Nielson, Smooth interpolation of large sets of scattered data, Internat. J. Numer. Methods Engrg. 15 (1980), 1691–1704.
[9] R.J. Renka, A.K. Cline, A triangle-based $C^{1}$ interpolation method, Rocky Mountain J. Math. 14 (1984), 223–237.
[10] R.J. Renka, Multivariate interpolation of large sets of scattered data, ACM Trans. Math. Software 14 (1988), 139–148.
[11] R.J. Renka, Algorithm 660: QSHEP2D: Quadratic method for bivariate interpolation of scattered data, ACM Trans. Math. Software 14 (1988), 149–150.
[12] R.J. Renka, Algorithm 790: CSHEP2D: Cubic method for bivariate interpolation of scattered data, ACM Trans. Math. Software 25 (1999), 70–73.
[13] D. Shepard, A two-dimensional interpolation function for irregularly-spaced data, In: Proceedings of 23rd National Conference ACM (1968), 517–523.
[14] W. Thacker, J. Zhang, L. Watson, J. Birch, M. Iyer, M. Berry, Algorithm 905: SHEPPACK: Modified Shepard algorithm for interpolation of scattered multivariate data, ACM Trans. Math. Software 37 (2010), 1–20.