INEQUALITIES FOR SOME ITERATED LINEAR OPERATORS AND THEIR APPLICATIONS IN APPROXIMATION THEORY

Abstract. Some inequalities for the "derivatives" of iterated linear operators are presented and applied to the investigation of degrees of approximation. In particular, with the help of the Laplacian we improve some classical results concerning the Jackson-type estimate, the inverse theorem and the saturation phenomenon.

The Peetre K-functional on X_{p,d} is given by
$$K_k(f, t)_{X_{p,d}} := \inf\Bigl\{\|f - g\|_{X_{p,d}} + t^k \sum_{|\alpha| = k} \|D^\alpha g\|_{X_{p,d}} : g \in W^k_{p,d}\Bigr\},$$
where, for r ∈ N, W^r_{p,d} is the Sobolev space given by
$$W^r_{p,d} := \{f \in X_{p,d} : D^\alpha f \in X_{p,d},\ |\alpha| \le r\}.$$
It is known (see [1] and [4]) that $K_k(f, t)_{X_{p,d}}$ is equivalent to the modulus of smoothness $\omega_k(f, t)_{X_{p,d}}$, defined by
$$\omega_k(f, t)_{X_{p,d}} := \sup_{0 < |h| \le t} \|\Delta_h^k f\|_{X_{p,d}},$$
where $\Delta_h^k$ denotes the k-th difference of f with step $h \in R^d$.
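For intuition, the modulus of smoothness can be estimated numerically by taking the suprema over a finite grid. The following sketch (the function names, grid sizes and the test function f = sin are our own illustrative choices, not taken from the paper) approximates $\omega_k(f, t)$ over one period:

```python
import math

def forward_diff(f, x, h, k):
    # k-th forward difference: Delta_h^k f(x) = sum_{i=0}^k (-1)^(k-i) C(k,i) f(x + i*h)
    return sum((-1) ** (k - i) * math.comb(k, i) * f(x + i * h) for i in range(k + 1))

def modulus_of_smoothness(f, t, k, samples=200):
    # approximates sup_{0 < h <= t} sup_x |Delta_h^k f(x)| on a grid over [0, 2*pi)
    best = 0.0
    for j in range(1, samples + 1):
        h = t * j / samples
        for m in range(samples):
            x = 2 * math.pi * m / samples
            best = max(best, abs(forward_diff(f, x, h, k)))
    return best

w1 = modulus_of_smoothness(math.sin, 0.1, 1)   # ~ 2*sin(t/2) <= t
w2 = modulus_of_smoothness(math.sin, 0.1, 2)   # ~ 4*sin(t/2)^2 <= t^2
```

For f = sin one expects $\omega_1(f, t) = 2\sin(t/2) \le t$ and $\omega_2(f, t) = 4\sin^2(t/2) \le t^2$, in line with the general bound $\omega_k(f, t) \le t^k \|f^{(k)}\|_\infty$ for smooth f.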
Two basic results in approximation theory are the so-called Bernstein inequality and the Jackson inequality. Roughly speaking, in the case of approximation by π_{n,d} the Bernstein inequality tells us that the L_p-norm of the derivative of a polynomial can be estimated by the product of the L_p-norm of the polynomial and the degree of that polynomial. For example (see [12]),
$$\|T_n'\|_{L_p(T)} \le n\, \|T_n\|_{L_p(T)}, \qquad T_n \in \pi_{n,1}.$$
The Jackson inequality, on the other hand, says that the best approximation
$$E_n(f)_{p,d} := \inf_{T \in \pi_{n,d}} \|f - T\|_{L_p(T^d)}$$
of f ∈ L_p(T^d) by elements of π_{n,d} can be estimated if the smoothness of the given function is known. The second result was pointed out by Jackson in 1912 (see [5]) in the case d = 1 and p = ∞. To state his result, let
$$k_{n,r}(t) := \alpha_{n,r} \left( \frac{\sin(nt/2)}{\sin(t/2)} \right)^{2r},$$
where the constant α_{n,r} is chosen such that ||k_{n,r}||_{L_1(T)} = 1. Let K_{n,r,d} be the tensor product of the kernels k_{n,r}, namely
$$K_{n,r,d}(x) := \prod_{i=1}^{d} k_{n,r}(x_i),$$
and let J_{n,r,d} be the corresponding convolution operator with kernel K_{n,r,d}, i.e., the so-called Jackson operator. Jackson's result may be formulated as
$$E_n(f)_{\infty,1} \le C\, \omega_1(f, 1/n)_{L_\infty(T)}.$$
If we consider the best approximation polynomials as operators acting on the approximated function, then these operators are in general not linear. Moreover, there are only a few cases in which one can obtain the exact form of the best approximation polynomials. Thus, the linear method, or linear operator approach, is a good alternative for approximation by polynomials; in particular, linear positive methods have attracted special interest in the past. In connection with the approximation degree of a linear method one investigates, among others, the Jackson-type estimate (i.e., the upper bound of the approximation error), the inverse theorem and the saturation phenomenon. We refer to [2], [11], [12] and [14] for detailed information concerning these subjects. In [6] (see also [3] and [7]) we introduced a new technique which unifies the above-mentioned subjects for a large class of positive linear operators in one form. In this way we obtain, for example, the best lower and upper
estimate for the Bernstein operator (see [7] and [15]), which completely characterizes the approximation behaviour of this operator. In the present paper we give a brief overview of our investigation of this subject. In Section 2 we point out the relation between the derivatives of iterated operators and the approximation behaviour of the operators under consideration. In Section 3 we give some applications of our technique (that is, of the results of Section 2) to approximation by trigonometric polynomials in several variables.
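To make the Jackson operator concrete, the following numerical sketch builds the kernel $k_{n,r}$ for d = 1, normalizes it by quadrature (the midpoint rule and the normalization against dt are our own choices; the paper only requires $\|k_{n,r}\|_{L_1(T)} = 1$), and applies $J_{n,r,1}$ to f = cos, whose approximation error can be checked to decrease as n grows:

```python
import math

def jackson_kernel(n, r, t):
    # unnormalized Jackson kernel (sin(nt/2)/sin(t/2))^(2r); value n^(2r) at t = 0 by continuity
    s = math.sin(t / 2)
    if abs(s) < 1e-12:
        return float(n) ** (2 * r)
    return (math.sin(n * t / 2) / s) ** (2 * r)

def jackson_approx(f, n, r, x, quad_points=2000):
    # J_{n,r,1} f(x) = integral over [-pi, pi) of alpha * k_{n,r}(t) * f(x - t) dt,
    # with alpha chosen so that the kernel integrates to 1 (midpoint rule)
    h = 2 * math.pi / quad_points
    ts = [-math.pi + (j + 0.5) * h for j in range(quad_points)]
    vals = [jackson_kernel(n, r, t) for t in ts]
    alpha = 1.0 / (sum(vals) * h)
    return sum(alpha * v * f(x - t) * h for v, t in zip(vals, ts))

# approximation error of cos at a sample point shrinks as n grows (O(n^-2) for r >= 2)
err4 = abs(jackson_approx(math.cos, 4, 2, 0.3) - math.cos(0.3))
err16 = abs(jackson_approx(math.cos, 16, 2, 0.3) - math.cos(0.3))
```

Since the kernel is even, J_{n,r,1} reproduces the cosine up to a contraction factor, and the observed error decays at the saturation rate n^{-2}.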

ITERATION OF CONVOLUTION OPERATORS
For positive convolution operators the following results generalize the theorems in [6] (see also [17]). Let Φ ∈ X_{1,d} be a positive kernel. The convolution operator L defined by this Φ is
$$Lf(x) := \int_{M^d} f(x - t)\, \Phi(t)\, dt, \qquad M = T \text{ or } M = R.$$
The iterates of L are defined by L^1 := L and L^j f := L(L^{j-1} f); we denote by Φ_j the kernel of L^j. The key step of our technique is the so-called iteration inequality, Lemma 2.1. For a better understanding we present this inequality for the case L_p(R) and give a much more natural proof, different from the one in [6].
Proof. We notice that if we can verify this inequality for p = 1 and p = +∞, then, via the Riesz–Thorin theorem (see Chapter XII of [18]), it holds for all 1 ≤ p ≤ +∞. Furthermore, it is enough to prove it for p = +∞ only: as L^N is again a convolution operator, the corresponding identity for its kernel yields the assertion for p = 1 by a duality argument. Thus, it remains to prove our inequality for p = +∞. To this end, observe that one can write Lf as an integral against the kernel Φ; iterating this representation and replacing t_j + x/N, j = 1, 2, ..., N, by t_j, j = 1, 2, ..., N, we obtain an N-fold integral representation of L^N f(x) in which the variable x is distributed equally among the N kernel factors. It is now clear that the derivative with respect to x falls on this product of kernels, each factor contributing 1/N.
Using the Cauchy–Schwarz inequality we get, because of the positivity of the kernel, an estimate of this derivative by a product of two integrals. It is clear that the first integral can be estimated by the corresponding expression for the iterated kernel itself, and the desired assertion for the case p = +∞ follows from the last two inequalities.
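Although the precise constants of the iteration inequality are not reproduced here, its qualitative content can be illustrated numerically: iterating a positive normalized kernel preserves positivity and total mass, while the total variation $TV(\Phi_N) = \|\Phi_N'\|_{L_1}$, which controls $\|(L^N f)'\|_\infty \le TV(\Phi_N)\, \|f\|_\infty$, decays like $N^{-1/2}$ for a concrete smooth kernel. The Gaussian-shaped kernel and the grid below are our own illustrative choices, not from the paper:

```python
import math

M = 512                      # grid points on [-pi, pi)
h = 2 * math.pi / M
grid = [-math.pi + j * h for j in range(M)]

# a positive kernel Phi of total mass 1 (narrow Gaussian bump, sigma = 0.1)
raw = [math.exp(-(t / 0.1) ** 2 / 2) for t in grid]
phi = [v / (h * sum(raw)) for v in raw]

def convolve(a, b):
    # periodic convolution: the kernel Phi_j of L^j satisfies Phi_j = Phi * Phi_{j-1}
    return [h * sum(a[j] * b[(i - j) % M] for j in range(M)) for i in range(M)]

def total_variation(a):
    # discrete TV(Phi_N) = ||Phi_N'||_{L_1}; bounds ||(L^N f)'||_inf <= TV(Phi_N)*||f||_inf
    return sum(abs(a[(i + 1) % M] - a[i]) for i in range(M))

phi_n, tv = list(phi), {1: total_variation(phi)}
for nn in range(2, 17):
    phi_n = convolve(phi, phi_n)
    tv[nn] = total_variation(phi_n)

mass16 = h * sum(phi_n)      # iteration preserves positivity and total mass 1
ratio = tv[4] / tv[16]       # expected near sqrt(16/4) = 2 for this kernel
```

For a Gaussian-type kernel the iterate Φ_N spreads at scale $\sqrt{N}$, so its total variation halves when N is quadrupled, which is exactly the square-root gain produced by the Cauchy–Schwarz step above.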
Lemma 2.1 allows us to improve some well-known inverse results for approximation by positive linear operators. Thus, we have

Theorem 2.2 ([9], [17]). Let there exist some j ≥ 1 such that
$$\Delta \Phi_j \in L_1(M^d),$$
where M = R or M = T. Then for fixed k ∈ N there exists a constant C_k > 0, which does not depend on f, such that for all f ∈ X_{p,d} and 1 ≤ p ≤ ∞ we have
$$\|\Delta^k L^{jk} f\|_{X_{p,d}} \le C_k\, \|f\|_{X_{p,d}}.$$

To present further results we need the concept of conjugate functions in the case d = 1. It is known that the conjugate function $\tilde f$ of f is defined in the periodic case by (cf. [18])
$$\tilde f(x) := -\frac{1}{2\pi} \int_{-\pi}^{\pi} f(x - t)\, \cot\frac{t}{2}\, dt,$$
and for the real line R by
$$\tilde f(x) := \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{f(x - t)}{t}\, dt,$$
respectively, both integrals being taken in the principal-value sense. In the case 1 < p < ∞ one has (see [18])
$$\|\tilde f\|_p \le C_p\, \|f\|_p.$$
The above inequality is not true for p = 1 and p = ∞. In fact, the following estimate for trigonometric polynomials, which holds for p = 1 or p = ∞ (see [18]), is in general sharp:
$$\|\tilde T_n\|_p \le C \log n\, \|T_n\|_p.$$
On the other hand, it is known that for polynomials and their conjugate functions one has the so-called Szegő inequality (see [18]): for all 1 ≤ p ≤ ∞ there holds
$$\|(\tilde T_n)'\|_p \le n\, \|T_n\|_p.$$
Now let d be greater than 1, let $\tilde L_i g$ be the conjugate function of g with respect to $x_i$, and let $\tilde D_i := D_i \tilde L_i$. In this way we define $\tilde D^\alpha := \tilde D_1^{\alpha_1} \cdots \tilde D_d^{\alpha_d}$. With these notations the multivariate version of the Szegő inequality is
$$\|\tilde D^\alpha T_n\|_p \le n^{|\alpha|}\, \|T_n\|_p.$$
Using our technique we get a generalization of the Szegő inequality.

Theorem 2.3. Let k ∈ N and the dimension d ≥ 1 be fixed. Then there exists a positive constant C_k > 0 such that for all ε > 0, all n and all T_n ∈ π_{n,d} the corresponding Szegő-type estimate holds.

The following result characterizes completely the quantitative approximation behavior of some positive linear operators.
Theorem 2.4 ([6], [9], [10], [17]). Let the condition of Theorem 2.2 be fulfilled and let 1 ≤ p ≤ +∞ be fixed. If there exists some constant A > 0 such that
$$\|Lg - g\|_{X_{p,d}} \le A\, \eta(\varepsilon)\, \|\Delta g\|_{X_{p,d}}$$
for all g ∈ W^3_{p,d}, with η(ε) ∼ ε², then, for some C_k > 0 which does not depend on ε, the inequalities
$$(2.4)\qquad C_k^{-1}\, K_k^{\Delta}(f, \varepsilon^{2k})_{X_{p,d}} \;\le\; \|(L - I)^k f\|_{X_{p,d}} \;\le\; C_k\, K_k^{\Delta}(f, \varepsilon^{2k})_{X_{p,d}}$$
hold, where
$$K_k^{\Delta}(f, t)_{X_{p,d}} := \inf\bigl\{\|f - g\|_{X_{p,d}} + t\, \|\Delta^k g\|_{X_{p,d}} : \Delta^k g \in X_{p,d}\bigr\}$$
denotes the K-functional generated by the Laplacian ∆ and the identity operator I.
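As a quick numerical sanity check of the Szegő inequality quoted above (the polynomial below, $T_3(x) = \cos x + 0.5 \sin 3x$, is an arbitrary example of ours), note that conjugation maps cos(mx) to sin(mx) and sin(mx) to −cos(mx), so both the Bernstein bound $\|T_n'\|_\infty \le n \|T_n\|_\infty$ and the Szegő bound $\|(\tilde T_n)'\|_\infty \le n \|T_n\|_\infty$ can be verified on a grid:

```python
import math

n = 3  # degree of the example polynomial T_n(x) = cos(x) + 0.5*sin(3x)

def T(x):
    return math.cos(x) + 0.5 * math.sin(3 * x)

def T_prime(x):
    return -math.sin(x) + 1.5 * math.cos(3 * x)

def conj_T_prime(x):
    # conjugation: cos(mx) -> sin(mx), sin(mx) -> -cos(mx),
    # so conj(T)(x) = sin(x) - 0.5*cos(3x) and its derivative is:
    return math.cos(x) + 1.5 * math.sin(3 * x)

xs = [2 * math.pi * j / 2000 for j in range(2000)]
sup_T = max(abs(T(x)) for x in xs)
sup_dT = max(abs(T_prime(x)) for x in xs)          # Bernstein:  <= n * sup_T
sup_dTconj = max(abs(conj_T_prime(x)) for x in xs) # Szego:      <= n * sup_T
```

Both suprema stay well below $n \|T_n\|_\infty$ for this example; equality in Szegő's inequality is attained only for special extremal polynomials.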
It is known (see e.g. [13]) that for 1 < p < ∞ the K-functional generated by the Laplacian is equivalent to the classical K-functional $K_2(f, t^2)_{X_{p,d}}$. For p = 1 or p = ∞, however, one can use the approach of [13] to show that these K-functionals are not equivalent. In general (see [17] for the details) we have, for some C > 0 which does not depend on f ∈ X_{p,d} and t, the estimate
$$(2.5)\qquad K^{\Delta}(f, t^2)_{X_{p,d}} \le C\, K_2(f, t^2)_{X_{p,d}}, \qquad K^{\Delta}(f, t^2)_{X_{p,d}} \le t^2\, \|\Delta f\|_{X_{p,d}} \quad (f \in W^2_{p,d}),$$
where $K^{\Delta}(f, t)_{X_{p,d}} := \inf\{\|f - g\|_{X_{p,d}} + t\, \|\Delta g\|_{X_{p,d}} : \Delta g \in X_{p,d}\}$. Obviously, (2.4) unifies the so-called direct estimate, the inverse theorem and the saturation theorem in one form for those positive linear operators which satisfy the conditions of Theorems 2.2 and 2.4.
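The unification claimed here can be sketched as follows, assuming that (2.4) is a two-sided estimate between the approximation error and the K-functional generated by the Laplacian:

```latex
% A sketch, assuming (2.4) has the two-sided form
%   C^{-1} K(f,\varepsilon^2) \le \|Lf - f\| \le C\, K(f,\varepsilon^2),
% where K is the K-functional generated by the Laplacian.
\begin{itemize}
\item Direct (Jackson-type) estimate: for any admissible $g$,
  $\|Lf - f\| \le C\, K(f,\varepsilon^2)
     \le C\bigl(\|f - g\| + \varepsilon^2 \|\Delta g\|\bigr)$.
\item Inverse theorem: $K(f,\varepsilon^2) \le C\,\|Lf - f\|$, so a prescribed
  decay of the approximation error forces a corresponding smoothness of $f$.
\item Saturation: $\|Lf - f\| = o(\varepsilon^2)$ implies
  $K(f,\varepsilon^2) = o(\varepsilon^2)$, which forces $\Delta f = 0$;
  hence the order $\varepsilon^2$ cannot be improved for nontrivial $f$.
\end{itemize}
```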

APPLICATIONS
In this section we give two applications concerning approximation by trigonometric polynomials in several variables. We study first the Jackson operators J_{n,r,d} with r ≥ 2. It is not hard to verify that these operators satisfy all the conditions of Theorems 2.2 and 2.4. Thus, the following inequalities give the final version for the approximation degree of the Jackson operators.

Theorem 3.1 ([9]). For fixed d ≥ 1, r ≥ 2 and k ≥ 1 there exists a positive constant C_{r,d,k} > 0 such that for all 1 ≤ p ≤ ∞, all f ∈ L_p(T^d) and all n ≥ 1,
$$(3.1)\qquad C_{r,d,k}^{-1}\, K_k^{\Delta}(f, n^{-2k})_{p,d} \;\le\; \|(J_{n,r,d} - I)^k f\|_{L_p(T^d)} \;\le\; C_{r,d,k}\, K_k^{\Delta}(f, n^{-2k})_{p,d},$$
where $K_k^{\Delta}(f, t)_{p,d} := \inf\{\|f - g\|_{p,d} + t\, \|\Delta^k g\|_{p,d}\}$.

Our method also allows us to improve some classical estimates about approximation by trigonometric polynomials; for d = 1 these results may be found in Chapter 7 of [2]. Thus, the direct and inverse results may be formulated as

Theorem 3.2. For fixed d ≥ 1, k ≥ 1 and 1 ≤ p ≤ ∞ there exists C > 0 such that the following estimates hold for f in the corresponding Sobolev spaces:
$$(3.2)\qquad E_n(f)_{p,d} \le C\, n^{-2}\, E_n(\Delta f)_{p,d},$$
$$(3.3)\qquad E_n(f)_{p,d} \le C\, n^{-2k}\, E_n(\Delta^k f)_{p,d},$$
$$(3.4)\qquad E_n(f)_{p,d} \le C\, n^{-2}\, \|\Delta f\|_{p,d}.$$

Proof. The second inequality of (2.5) and the estimate in [12] (see pages 197–204 there) imply (3.4), while (3.3) is a simple consequence of (3.2), obtained by iteration. Thus, we need only show (3.2). It is known (see page 197 of [12]) that $E_n(f)_{p,d} \le C K_3(f, 1/n)_{p,d}$. Assume T*_n ∈ π_{n,d} to be a best approximation of ∆f. Then there exists a T_n ∈ π_{n,d} such that ∆T_n = c + T*_n with $c = -(2\pi)^{-d} \int_{T^d} T^*_n(x)\, dx$. We notice that $\int_{T^d} \Delta f(x)\, dx = 0$. Thus, we can estimate c by
$$|c| = (2\pi)^{-d}\, \Bigl|\int_{T^d} \bigl(\Delta f(x) - T^*_n(x)\bigr)\, dx\Bigr| \le C\, E_n(\Delta f)_{p,d},$$
which gives (3.2).
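The mean-value adjustment in the proof can be made concrete for d = 1, where ∆ = d²/dx² and ∆(a cos mx + b sin mx) = −m²(a cos mx + b sin mx): a trigonometric polynomial with zero mean is an exact Laplacian of another polynomial obtained by dividing each coefficient by −m². The coefficients below are an arbitrary example of ours:

```python
import math

# T*_n given by coefficients m: (a_m, b_m), plus a constant (mean) term c0
c0 = 0.8
star = {1: (2.0, -1.0), 4: (0.5, 0.3)}

# Solve T_n'' = T*_n - c0: since (cos(mx))'' = -m^2 cos(mx), divide each
# coefficient by -m^2; the mean c0 must be removed first, as in the proof.
T = {m: (-a / m**2, -b / m**2) for m, (a, b) in star.items()}

def eval_trig(coeffs, x, const=0.0):
    return const + sum(a * math.cos(m * x) + b * math.sin(m * x)
                       for m, (a, b) in coeffs.items())

def second_derivative(coeffs, x):
    return sum(-m**2 * (a * math.cos(m * x) + b * math.sin(m * x))
               for m, (a, b) in coeffs.items())

x = 0.7
lhs = second_derivative(T, x)                  # (Delta T_n)(x)
rhs = eval_trig(star, x, const=c0) - c0        # (T*_n - mean)(x)
```

The two sides agree identically in x, which is exactly the relation ∆T_n = T*_n − c used in the proof (with c the mean of T*_n).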
For applications (e.g., in Computer Aided Geometric Design) one sometimes needs to know not only the distance between the approximating polynomial and the function which is approximated, but also the uniform smoothness of this polynomial sequence in connection with this function. For a best approximation sequence {T*_n} of f one has corresponding estimates (see [16], [17]). The following result shows that, in fact, if the distance between the approximating polynomial and the approximated function is bounded by a K-functional, then the uniform smoothness of this sequence can always be estimated by this K-functional.
Theorem 3.3 ([9]). Let f ∈ L_p(T^d) and let the polynomial sequence {T_n}, T_n ∈ π_{n,d}, satisfy a bound of the type just described, i.e., let ‖f − T_n‖_{p,d} be dominated by a K-functional. Then there exists C_k > 0 such that for j = 1, ..., k the corresponding estimates for the smoothness of {T_n} hold. For d = 1 the above assertions follow from a lemma of Zamansky [2].
Let X_{p,d} be either L_p(T^d), the L_p-space of periodic functions with T = [−π, π], or L_p(R^d), 1 ≤ p ≤ +∞ (for p = +∞, let L_∞(T^d) = C(T^d), and let L_∞(R^d) be the set of bounded functions in C(R^d)). Denote by ‖ · ‖_{X_{p,d}} the usual norm on X_{p,d}. Denote further D_i := ∂/∂x_i and $D^\alpha := D_1^{\alpha_1} \cdots D_d^{\alpha_d}$ for a multi-index α, and let π_{n,d} be the set of trigonometric polynomials of degree at most n in each variable. On the other hand, for any T_n ∈ π_{n,d} there holds $E_n(f)_{p,d} \le \|f - T_n\|_{p,d}$. Thus, (2.5) implies $E_n(f)_{p,d} \le C n^{-2}\, \|\Delta f\|_{p,d}$.
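As an illustration of the final estimate $E_n(f)_{p,d} \le C n^{-2} \|\Delta f\|_{p,d}$ for d = 1 and p = ∞ (the test function, a cosine series with rapidly decaying coefficients, is our own example), one can bound $E_n(f)$ from above by the error of the Fourier partial sum and compare it with $n^{-2} \|\Delta f\|_\infty$:

```python
import math

# f(x) = sum_{m=1}^{40} cos(mx)/m^4, so Delta f = -sum_{m=1}^{40} cos(mx)/m^2 (d = 1)
MAX = 40

def f(x):
    return sum(math.cos(m * x) / m**4 for m in range(1, MAX + 1))

def S(n, x):
    # Fourier partial sum of degree n, an admissible element of pi_{n,1}
    return sum(math.cos(m * x) / m**4 for m in range(1, n + 1))

def lap_f(x):
    return -sum(math.cos(m * x) / m**2 for m in range(1, MAX + 1))

xs = [2 * math.pi * j / 1000 for j in range(1000)]
sup_lap = max(abs(lap_f(x)) for x in xs)          # ||Delta f||_inf on the grid

def upper_En(n):
    # E_n(f) <= ||f - S_n f||_inf, since S_n f belongs to pi_{n,1}
    return max(abs(f(x) - S(n, x)) for x in xs)

# check E_n(f) <= n^{-2} ||Delta f||_inf (here even with constant C = 1)
checks = [upper_En(n) <= sup_lap / n**2 for n in (4, 8, 16)]
```

For this example the partial-sum error behaves like $n^{-3}$, so the bound $n^{-2}\|\Delta f\|_\infty$ holds with plenty of room, consistent with the estimate above.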