Generalization of Jensen’s and Jensen-Steffensen’s inequalities and their converses by Lidstone’s polynomial and majorization theorem
February 20, 2017.
\(^{1}\)University of Zagreb, Faculty of Architecture, Kaciceva 26, \(10000\) Zagreb, Croatia, garas@arhitekt.hr
\(^{2}\)University of Zagreb, Faculty of Textile Technology, Prilaz baruna Filipovica 28a, \(10 000\) Zagreb, Croatia, pecaric@element.hr
\(^{3}\)University of Zagreb, Faculty of Food Technology and Biotechnology, Mathematics department, Pierottijeva 6, 10000 Zagreb, Croatia, avukelic@pbf.hr
In this paper, using majorization theorems and Lidstone’s interpolating polynomials we obtain results concerning Jensen’s and Jensen-Steffensen’s inequalities and their converses in both the integral and the discrete case. We give bounds for identities related to these inequalities by using Čebyšev functionals. We also give Grüss type inequalities and Ostrowski type inequalities for these functionals. We then use these generalizations to construct linear functionals, and we present mean value theorems and \(n\)-exponential convexity, which leads to exponential convexity and then log-convexity for these functionals. We give some families of functions which enable us to construct large families of exponentially convex functions, and we also give Stolarsky type means and their monotonicity.
MSC. Primary 26D15, Secondary 26D07, 26A51
Keywords. Majorization, Green function, Jensen inequality, Jensen-Steffensen inequality, \((2n)\)-convex function, Lidstone polynomial, Čebyšev functional, Grüss type inequality, Ostrowski type inequality, Cauchy type mean value theorems, \(n\)-exponential convexity, exponential convexity, log-convexity, means.
1 Introduction
Majorization makes precise the vague notion that the components of a vector \(\mathbf{x}\) are “less spread out" or “more nearly equal" than the components of a vector \(\mathbf{y}\). For fixed \(m\geq 2\) let
\[ \mathbf{x}=\left( x_{1},\ldots ,x_{m}\right) ,\qquad \mathbf{y}=\left( y_{1},\ldots ,y_{m}\right) \]
denote two \(m\)-tuples, and let
\[ x_{[1]}\geq x_{[2]}\geq \cdots \geq x_{[m]},\qquad y_{[1]}\geq y_{[2]}\geq \cdots \geq y_{[m]} \]
be their ordered components.
Majorization: (see [12, p. 319]) \(\mathbf{x}\) is said to majorize \(\mathbf{y}\) (or \(\mathbf{y}\) is said to be majorized by \(\mathbf{x}\)), in symbols \(\mathbf{x}\succ \mathbf{y}\), if
\[ \sum _{i=1}^{l}y_{[i]}\leq \sum _{i=1}^{l}x_{[i]} \tag{1} \]
holds for \(l=1,2,\ldots ,m-1\) and
\[ \sum _{i=1}^{m}y_{[i]}=\sum _{i=1}^{m}x_{[i]}. \tag{2} \]
Note that (1) is equivalent to
\[ \sum _{i=m-l+1}^{m}y_{[i]}\geq \sum _{i=m-l+1}^{m}x_{[i]} \]
holding for \(l=1,2,\ldots ,m-1\).
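For illustration, the conditions of this definition are easy to check computationally; the following sketch (the helper name is ours, not from the paper) tests whether \(\mathbf{x}\succ \mathbf{y}\):

```python
def majorizes(x, y, tol=1e-12):
    """Return True if x majorizes y: the totals agree and the partial sums
    of the decreasingly sorted x dominate those of y."""
    xs = sorted(x, reverse=True)
    ys = sorted(y, reverse=True)
    if abs(sum(xs) - sum(ys)) > tol:          # total sums must be equal
        return False
    px = py = 0.0
    for a, b in zip(xs[:-1], ys[:-1]):        # l = 1, ..., m-1
        px += a
        py += b
        if px < py - tol:                     # partial-sum condition fails
            return False
    return True

print(majorizes([3, 1, 0], [2, 1, 1]))  # True: 3 >= 2, 4 >= 3, totals 4 = 4
print(majorizes([2, 1, 1], [3, 1, 0]))  # False
```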
There are several equivalent characterizations of the majorization relation \(\mathbf{x}\succ \mathbf{y}\) in addition to the conditions given in the definition of majorization. One of them answers a question posed and answered in 1929 by Hardy, Littlewood and Pólya in [7] and [8]: \(\mathbf{x}\) majorizes \(\mathbf{y}\) if and only if
\[ \sum _{i=1}^{m}\phi (y_{i})\leq \sum _{i=1}^{m}\phi (x_{i}) \]
for every continuous convex function \(\phi \). Another interesting characterization of \(\mathbf{x}\succ \mathbf{y}\), also by Hardy, Littlewood and Pólya in [7] and [8], is that \(\mathbf{y}=\mathbf{Px}\) for some doubly stochastic matrix \(\mathbf{P}\). In fact, this characterization implies that the set of vectors \(\mathbf{y}\) that satisfy \(\mathbf{x}\succ \mathbf{y}\) is the convex hull spanned by the \(m!\) points formed from the permutations of the elements of \(\mathbf{x}\).
The following theorem is well-known as the majorization theorem and a convenient reference for its proof is given by Marshall and Olkin in [ 11 , p. 14 ] (see also [ 12 , p. 320 ] ):
Let \(\mathbf{x}=\left( x_{1},\ldots ,x_{m}\right) ,~ \mathbf{y}=\left( y_{1},\ldots ,y_{m}\right) \) be two \(m\)-tuples such that \(x_{i},y_{i}\in \left[ a,b\right], ~ i=1,\ldots ,m\). Then
\[ \sum _{i=1}^{m}\phi (y_{i})\leq \sum _{i=1}^{m}\phi (x_{i}) \]
holds for every continuous convex function \(\phi :[a,b]\rightarrow \mathbb {R}\) if and only if \(\mathbf{x}\succ \mathbf{y}\) holds.
The following theorem can be regarded as a generalization of Theorem 1, known as the Weighted Majorization Theorem; it was proved by Fuchs in [6] (see also [11, p. 580] and [12, p. 323]).
Let \(\mathbf{x}=\left( x_{1},\ldots ,x_{m}\right) ,~ \mathbf{y}=\left( y_{1},\ldots ,y_{m}\right) \) be two decreasing real \(m\)-tuples with \(x_{i},y_{i}\in \left[ a,b\right],~ i=1,\ldots ,m\), and let \(\mathbf{w}=\left( w_{1},\ldots ,w_{m}\right) \) be a real \(m\)-tuple such that
\[ \sum _{i=1}^{l}w_{i}y_{i}\leq \sum _{i=1}^{l}w_{i}x_{i}\quad \text{for }l=1,\ldots ,m-1, \tag{4} \]
and
\[ \sum _{i=1}^{m}w_{i}y_{i}=\sum _{i=1}^{m}w_{i}x_{i}. \tag{5} \]
Then for every continuous convex function \(\phi :[a,b]\rightarrow \mathbb {R}\), we have
\[ \sum _{i=1}^{m}w_{i}\,\phi (y_{i})\leq \sum _{i=1}^{m}w_{i}\,\phi (x_{i}). \]
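Fuchs’ theorem can be probed numerically; in the sketch below (the helper name and the data are ours) the weights are allowed to be signed, as the theorem permits:

```python
import math

def fuchs_holds(x, y, w, phi):
    """Check conditions (4)-(5) for decreasing tuples x, y and real weights w,
    then test the conclusion: sum w_i phi(y_i) <= sum w_i phi(x_i)."""
    m = len(x)
    sx = sy = 0.0
    for l in range(m - 1):          # condition (4): weighted partial sums
        sx += w[l] * x[l]
        sy += w[l] * y[l]
        assert sx >= sy - 1e-12
    sx += w[-1] * x[-1]             # condition (5): weighted totals equal
    sy += w[-1] * y[-1]
    assert math.isclose(sx, sy)
    return sum(wi * phi(yi) for wi, yi in zip(w, y)) <= \
           sum(wi * phi(xi) for wi, xi in zip(w, x)) + 1e-12

# phi(t) = t^2 is convex; note the negative weight
print(fuchs_holds([4, 2, 0], [3, 2, 1], [1, -1, 1], lambda t: t * t))  # True
```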
Bernstein proved that if all the even derivatives of \(f\) are at least \(0\) in \(\left( a,b\right) \), then \(f\) has an analytic continuation into the complex plane. Boas suggested to Widder that this might be proved by use of the Lidstone series. This seemed plausible because the Lidstone series, a generalization of the Taylor series, approximates a given function in the neighborhood of two points instead of one by using the even derivatives. Such series have been studied by G. J. Lidstone (1929), H. Poritsky (1932), J. M. Whittaker (1934) and others (see [3]).
Let \(\phi \in C^{\infty }([0,1])\). Then the Lidstone series has the form
\[ \phi (x)=\sum _{k=0}^{\infty }\left[ \phi ^{(2k)}(0)\Lambda _{k}(1-x)+\phi ^{(2k)}(1)\Lambda _{k}(x)\right] , \]
where \(\Lambda _{n}\) is a polynomial of degree \(2n+1\) defined by the relations
\[ \Lambda _{0}(t)=t,\qquad \Lambda _{n}''(t)=\Lambda _{n-1}(t),\qquad \Lambda _{n}(0)=\Lambda _{n}(1)=0,\quad n\geq 1. \]
Other explicit representations of the Lidstone polynomials are given in [2] and [14], for example
\[ \Lambda _{n}(t)=\frac{t^{2n+1}}{(2n+1)!}-\frac{t^{2n-1}}{6\,(2n-1)!}-\sum _{k=0}^{n-2}\frac{2\left( 2^{2k+3}-1\right) }{(2k+4)!}\,B_{2k+4}\,\frac{t^{2n-2k-3}}{(2n-2k-3)!} \]
and
\[ \Lambda _{n}(t)=\frac{2^{2n+1}}{(2n+1)!}\,B_{2n+1}\!\left( \frac{1+t}{2}\right) , \]
where \(B_{2k+4}\) is the \((2k+4)\)-th Bernoulli number and \(B_{2n+1}\left( \frac{1+t}{2}\right) \) is the Bernoulli polynomial.
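The defining relations \(\Lambda_0(t)=t\), \(\Lambda_n''=\Lambda_{n-1}\), \(\Lambda_n(0)=\Lambda_n(1)=0\) can be realized in exact rational arithmetic, which gives a convenient cross-check of explicit representations for small \(n\); a sketch (the helpers are ours):

```python
from fractions import Fraction as F

# Polynomials as coefficient lists [c0, c1, ...] for c0 + c1*t + c2*t^2 + ...
def integ(p):
    """Antiderivative with zero constant term."""
    return [F(0)] + [c / (i + 1) for i, c in enumerate(p)]

def val(p, x):
    return sum(c * x**i for i, c in enumerate(p))

def lidstone(n):
    """Lambda_0(t) = t; Lambda_n'' = Lambda_{n-1}, Lambda_n(0) = Lambda_n(1) = 0."""
    p = [F(0), F(1)]                  # Lambda_0(t) = t
    for _ in range(n):
        q = integ(integ(p))           # particular antiderivative, q(0) = 0
        q[1] -= val(q, F(1))          # subtract q(1)*t so that q(1) = 0
        p = q
    return p

# Lambda_1(t) = (t^3 - t)/6 and Lambda_2(t) = t^5/120 - t^3/36 + 7t/360
print(lidstone(1) == [F(0), F(-1, 6), F(0), F(1, 6)])                      # True
print(lidstone(2) == [F(0), F(7, 360), F(0), F(-1, 36), F(0), F(1, 120)])  # True
```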
In [15], Widder proved the following fundamental lemma:
If \(\phi \in C^{(2n)}([0,1])\), then
\[ \phi (t)=\sum _{k=0}^{n-1}\left[ \phi ^{(2k)}(0)\Lambda _{k}(1-t)+\phi ^{(2k)}(1)\Lambda _{k}(t)\right] +\int _{0}^{1}G_{n}(t,s)\phi ^{(2n)}(s)\, ds, \]
where
\[ G_{1}(t,s)=G(t,s)=\left\{ \begin{array}{ll} (t-1)s, & s\leq t, \\ (s-1)t, & t\leq s, \end{array}\right. \]
is the homogeneous Green’s function of the differential operator \(\frac{d^{2}}{ds^{2}}\) on \([0,1]\), and the successive iterates of \(G(t,s)\) are
\[ G_{n}(t,s)=\int _{0}^{1}G(t,p)\, G_{n-1}(p,s)\, dp,\qquad n\geq 2. \]
The Lidstone polynomial can be expressed in terms of \(G_{n}(t,s)\) as
\[ \Lambda _{n}(t)=\int _{0}^{1}G_{n}(t,s)\, s\, ds. \]
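The integral relations between \(G_n\) and \(\Lambda_n\) (namely \(\Lambda_1(t)=\int_0^1 G(t,s)\,s\,ds\) and, via the iterates, \(\Lambda_2(t)=\int_0^1 G(t,p)\,\Lambda_1(p)\,dp\)) can be checked by simple quadrature; a numerical sketch under our own discretization choices:

```python
def G(t, s):
    """Green's function of d^2/ds^2 on [0,1] with zero boundary values."""
    return (t - 1) * s if s <= t else (s - 1) * t

def quad(f, n=2000):
    """Composite trapezoid rule on [0,1]; accurate enough for this check."""
    h = 1.0 / n
    return h * (sum(f(i * h) for i in range(1, n)) + 0.5 * (f(0.0) + f(1.0)))

lam1 = lambda t: (t**3 - t) / 6                        # Lambda_1
lam2 = lambda t: t**5 / 120 - t**3 / 36 + 7 * t / 360  # Lambda_2

t0 = 0.3
assert abs(quad(lambda s: G(t0, s) * s) - lam1(t0)) < 1e-5        # Lambda_1 relation
assert abs(quad(lambda p: G(t0, p) * lam1(p)) - lam2(t0)) < 1e-5  # iterate step
print("ok")
```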
Let \(\phi \) be a real-valued function defined on the segment \([a,b]\). The divided difference of order \(n\) of the function \(\phi \) at distinct points \(x_{0},\ldots ,x_{n} \in [a,b]\) is defined recursively (see [4], [12]) by
\[ \phi [x_{i}]=\phi (x_{i}),\qquad i=0,\ldots ,n, \]
and
\[ \phi [x_{0},\ldots ,x_{n}]=\frac{\phi [x_{1},\ldots ,x_{n}]-\phi [x_{0},\ldots ,x_{n-1}]}{x_{n}-x_{0}}. \]
The value \(\phi [x_{0},\ldots ,x_{n}]\) is independent of the order of the points \(x_{0},\ldots ,x_{n}\).
The definition may be extended to include the case in which some (or all) of the points coincide. Assuming that \(\phi ^{(j-1)}(x)\) exists, we define
\[ \phi [\underbrace{x,\ldots ,x}_{j\ \text{times}}]=\frac{\phi ^{(j-1)}(x)}{(j-1)!}. \]
The notion of \(n\)-convexity goes back to Popoviciu [ 13 ] . We follow the definition given by Karlin [ 9 ] :
A function \(\phi : [a,b]\rightarrow \mathbb {R}\) is said to be \(n\)-convex on \([a,b]\), \(n \geq 0\), if for all choices of \(n+1\) distinct points \(x_{0},\ldots ,x_{n}\) in \([a,b]\) the \(n\)-th order divided difference of \(\phi \) satisfies
\[ \phi [x_{0},\ldots ,x_{n}]\geq 0. \]
In fact, Popoviciu proved that each continuous \(n\)-convex function on \([a,b]\) is the uniform limit of a sequence of \(n\)-convex polynomials. Many related results, as well as some important inequalities due to Favard, Berwald and Steffensen, can be found in [10].
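Divided differences and the \(n\)-convexity condition translate directly into a small computational test; a sketch (the helper names are ours) based on the defining recursion:

```python
def divided_diff(phi, xs):
    """phi[x_0, ..., x_n] at distinct points, by the defining recursion."""
    if len(xs) == 1:
        return phi(xs[0])
    return (divided_diff(phi, xs[1:]) - divided_diff(phi, xs[:-1])) / (xs[-1] - xs[0])

def looks_n_convex(phi, pts, n):
    """Sample all (n+1)-point divided differences over consecutive points."""
    return all(divided_diff(phi, pts[i:i + n + 1]) >= -1e-12
               for i in range(len(pts) - n))

pts = [i / 10 for i in range(11)]
print(looks_n_convex(lambda x: x**4, pts, 4))   # True: 4th derivative is 24 >= 0
print(looks_n_convex(lambda x: -x**3, pts, 3))  # False: -x^3 is not 3-convex
```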
In [1] the authors proved the following majorization theorems for \((2n)\)-convex functions:
Let \(n\in \mathbb {N}\), \(\mathbf{x}=\left( x_{1},...,x_{m}\right) \), \(\mathbf{y}=\left( y_{1},...,y_{m}\right) \) be two decreasing real \(m\)-tuples with \(x_{i},y_{i}\in \lbrack a,b]~ ~ (i=1,\ldots ,m)\) and let \(\mathbf{w}=\left( w_{1},...,w_{m}\right) \) be a real \(m\)-tuple which satisfies (4) and (5).
(i) If \(n\) is odd, then for every \((2n)\)-convex function \(\phi :[a,b]\rightarrow \mathbb {R}\), it holds
(ii) If \(n\) is even, then for every \((2n)\)-convex function \(\phi :[a,b]\rightarrow \mathbb {R}\), it holds
In [3], using Lidstone’s interpolating polynomials and conditions on Green’s functions, the authors present results for Jensen’s inequality and converses of Jensen’s inequality for signed measures. In this paper we give generalized results for Jensen’s and Jensen-Steffensen’s inequalities and their converses by using the majorization theorem and Lidstone’s polynomial for \((2n)\)-convex functions. Then we give bounds for identities related to these inequalities by using Čebyšev functionals. We give Grüss type inequalities and Ostrowski type inequalities for these functionals. We also use these generalizations to construct linear functionals, and we present mean value theorems and \(n\)-exponential convexity, which leads to exponential convexity and then log-convexity. Finally, we present several families of functions which enable us to construct large families of exponentially convex functions. We give classes of Cauchy type means and prove their monotonicity.
2 Generalization of Jensen’s inequality
We will use the following notation for composition of functions:
Let \(n\in \mathbb {N}\), \(\mathbf{x}=\left( x_{1},...,x_{m}\right) \), and \(\mathbf{w}=\left( w_{1},...,w_{m}\right) \) be \(m\)-tuples such that \(x_{i}\in \left[ a,b\right] \), \(w_{i}\in \mathbb {R}\) \(,i=1,...,m\), \(W_{m}=\sum _{i=1}^{m}w_{i}\), \(\overline{x}=\frac{1}{W_{m}}\sum _{i=1}^{m}w_{i}x_{i}\) and \(\phi \in C^{(2n)}\left[ a,b\right] .\) Then
By Widder’s lemma we can represent every function \(\phi \in C^{(2n)}([a,b])\) in the form:
where \(\Lambda _{k}\) is a Lidstone polynomial. Using (19), we calculate \(\phi (x_{i})\) and \(\phi (\overline{x})\), and from (18) we obtain (17).
Using Theorem 7, we give a generalization of Jensen’s inequality for \((2n)\)-convex functions:
Let \(n\in \mathbb {N}\), let \(\mathbf{x}=\left( x_{1},\ldots ,x_{m}\right) \) be a decreasing real \(m\)-tuple with \(x_{i}\in \left[ a,b\right] \), \(i=1,\ldots ,m\), let \(\mathbf{w}=\left( w_{1},\ldots ,w_{m}\right) \) be a positive \(m\)-tuple, \(W_{m}=\sum _{i=1}^{m}w_{i}\) and \(\overline{x}=\frac{1}{W_{m}}\sum _{i=1}^{m}w_{i}x_{i}\).
(i) If \(n\) is odd, then for every \((2n)\)-convex function \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\), it holds
Moreover, we define the function \(F:[a,b]\rightarrow \mathbb {R}\) by
If \(F\) is a convex function, then the right-hand side of (21) is non-negative and
(ii) If \(n\) is even, then for every \((2n)\)-convex function \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\), the reverse inequality in (21) holds.
Moreover, if \(F\) is a concave function, then the reverse inequality in (23) is valid.
For \(l=k+1,\ldots ,m-1\) such that \(x_{k+1}{\lt}\overline{x}\), we have
So,
and obviously
Now, we put \(\mathbf{x}=(x_{1},\ldots ,x_{m})\) and \(\mathbf{y}=(\bar{x},\ldots ,\bar{x})\) in Theorem 7 to get inequality (21).
For inequality (23) we use the fact that for a convex function \(F\) we have
Let \(x:[\alpha ,\beta ]\rightarrow \mathbb {R}\) be a continuous decreasing function such that \(x([\alpha ,\beta ])\subseteq \lbrack a,b]\), let \(\lambda :[\alpha ,\beta ]\rightarrow \mathbb {R}\) be an increasing, bounded function with \(\lambda (\alpha )\neq \lambda (\beta )\), and let \(\overline{x}=\frac{\int _{\alpha }^{\beta }x(t)\, d\lambda (t)}{\int _{\alpha }^{\beta }\, d\lambda (t)}\). For \(x(\gamma )\geq \overline{x}\), we have:
If \(x(\gamma ){\lt}\overline{x}\) we have
Equality
obviously holds.
So, if \(n\in \mathbb {N}\) is odd, then for every \((2n)\)-convex function \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\), we obtain from the above theorem the integral version of inequality (21),
which is the result proved in [3].
Moreover, for the convex function \(F\) defined in (22), the right-hand side of (29) is non-negative and
If \(n\) is even, then for every \((2n)\)-convex function \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\) the reverse inequality in (29) holds. Moreover, if \(F\) is a concave function, then the reverse inequality in (30) is also valid.
Motivated by the inequalities (21) and (29), we define functionals \(\Theta _{1}(\phi )\) and \(\Theta _{2}(\phi )\) by
and
Similarly as in [3], we can construct new families of exponentially convex functions and Cauchy type means by looking at these linear functionals. The monotonicity property of the generalized Cauchy means obtained via these functionals can be proved by using the properties of the linear functionals associated with this error representation, such as \(n\)-exponential and logarithmic convexity.
3 Generalization of Jensen-Steffensen’s inequality
Using the majorization theorem for \((2n)\)-convex functions, we give a generalization of Jensen-Steffensen’s inequality:
Let \(n\in \mathbb {N}\), let \(\mathbf{x}=\left( x_{1},\ldots , x_{m}\right) \) be a decreasing real \(m\)-tuple with \(x_{i}\in \left[ a,b\right] \), \(i=1,\ldots ,m\), and let \(\mathbf{w}=\left( w_{1},\ldots , w_{m}\right) \) be a real \(m\)-tuple such that \(0\leq W_{k} \leq W_{m}\), \(k=1,\ldots , m\), \(W_{m}{\gt}0\), where \(W_{k}=\sum _{i=1}^{k}w_{i}\) and \(\overline{x}=\frac{1}{W_{m}}\sum _{i=1}^{m}w_{i}x_{i}\).
(i) If \(n\) is odd, then for every \((2n)\)-convex function \(\phi : \left[ a,b\right] \rightarrow \mathbb {R}\), the inequality (21) holds.
Moreover, for the convex function \(F\) defined in (22) the inequality (23) is also valid.
(ii) If \(n\) is even, then for every \((2n)\)-convex function \(\phi : \left[ a,b\right] \rightarrow \mathbb {R}\), the reverse inequality in (21) holds.
Moreover, for the concave function \(F\) defined in (22) the reverse inequality in (23) is also valid.
and so we get
For \(l=k+1,...,m-1\), such that \(x_{k+1}{\lt}\overline{x}\) we have
and now
So, similarly as in Theorem 9, we conclude that the majorization conditions (4) and (5) are satisfied, so inequalities (21) and (23) are valid.
Let \(x:[\alpha ,\beta ] \rightarrow \mathbb {R}\) be a continuous, decreasing function such that \(x([\alpha ,\beta ])\subseteq [a,b]\), let \(\lambda :[\alpha ,\beta ] \rightarrow \mathbb {R}\) be either continuous or of bounded variation satisfying \(\lambda (\alpha )\leq \lambda (t)\leq \lambda (\beta )\) for all \(t\in [\alpha ,\beta ]\), and let \(\overline{x}=\frac{\int _{\alpha }^{\beta } x(t) \, d\lambda (t)}{\int _{\alpha }^{\beta }\, d\lambda (t)}\). For \(x(\gamma )\geq \overline{x}\), we have:
and so
If \(x(\gamma ){\lt}\overline{x}\) we have
and now
Similarly as in Remark 10, we get that the majorization conditions are satisfied, so inequalities (29) and (30) are valid.
4 Generalization of converse of Jensen’s inequality
Let \(n\in \mathbb {N}\), let \(\mathbf{x}=\left( x_{1},\ldots ,x_{r}\right) \) be a real \(r\)-tuple with \(x_{i}\in \lbrack m,M]\subseteq \left[ a,b\right] \), \(i=1,\ldots ,r\), let \(\mathbf{w}=\left( w_{1},\ldots ,w_{r}\right) \) be a positive \(r\)-tuple, \(W_{r}=\sum _{i=1}^{r}w_{i}\) and \(\overline{x}=\frac{1}{W_{r}}\sum _{i=1}^{r}w_{i}x_{i}\).
(i) If \(n\) is odd, then for every \((2n)\)-convex function \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\), it holds
Moreover, for the convex function \(F\) defined in (22), we have
(ii) If \(n\) is even, then for every \((2n)\)-convex function \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\), the reverse inequality in (36) holds.
Moreover, for the concave function \(F\) defined in (22) the reverse inequality in (37) is also valid.
Hence, for any odd \(n\) and \((2n)\)-convex function \(\phi \) we get (36).
For inequality (37) we use the fact that for a convex function \(F\) we have
(ii) The proof is similar to that of part (i).
Let \(n\in \mathbb {N}\), let \(\mathbf{x}=\left( x_{1},\ldots ,x_{r}\right) \) be a real \(r\)-tuple with \(x_{i}\in \lbrack m,M]\), let \(\mathbf{w}=\left( w_{1},\ldots ,w_{r}\right) \) be a positive \(r\)-tuple, \(W_{r}=\sum _{i=1}^{r}w_{i}\) and \(\overline{x}=\frac{1}{W_{r}}\sum _{i=1}^{r}w_{i}x_{i}\).
If \(n\) is odd then for every \((2n)\)-convex function \(\phi :\left[ m,M\right] \rightarrow \mathbb {R}\) it holds
If \(n\) is even, the reverse inequality in (38) is valid.
Let \(x:[\alpha ,\beta ] \rightarrow \mathbb {R}\) be a continuous function such that \(x([\alpha ,\beta ])\subseteq [m,M]\subseteq [a,b]\), let \(\lambda :[\alpha ,\beta ] \rightarrow \mathbb {R}\) be an increasing, bounded function with \(\lambda (\alpha )\neq \lambda (\beta )\), and let \(\overline{x}=\frac{\int _{\alpha }^{\beta } x(t) \, d\lambda (t)}{\int _{\alpha }^{\beta }\, d\lambda (t)}\). Similarly as in Theorem 14, we get the integral version of the converse of Jensen’s inequality.
For odd \(n\in \mathbb {N}\) and for every \((2n)\)-convex function \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\) we have:
which is the result proved in [3].
Moreover, for the convex function \(F\) defined in (22) we have
If \(n\) is even, then for every \((2n)\)-convex function \(\phi : \left[ a,b\right] \rightarrow \mathbb {R}\), the reverse inequality in (39) holds.
Moreover, for the concave function \(F\) defined in (22) the reverse inequality in (40) is also valid.
Motivated by the inequalities (36) and (39), we define functionals \(\Theta _{3}(\phi )\) and \(\Theta _{4}(\phi )\) by
and
Now we can make the same observations as in Remark 11.
5 Bounds for identities related to the generalization of the majorization inequality
For two Lebesgue integrable functions \(f,h:\left[ a,b\right] \rightarrow \mathbb {R}\) we consider the Čebyšev functional
\[ T(f,h)=\frac{1}{b-a}\int _{a}^{b}f(x)h(x)\, dx-\frac{1}{b-a}\int _{a}^{b}f(x)\, dx\cdot \frac{1}{b-a}\int _{a}^{b}h(x)\, dx. \]
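As a sanity check, \(T(f,h)\) is easy to approximate numerically; a minimal sketch (the quadrature helper is ours). For \(f(x)=h(x)=x\) on \([0,1]\), one gets \(T=1/3-1/4=1/12\):

```python
def cheb_T(f, h, a, b, n=4000):
    """Approximate the Chebyshev functional T(f,h) on [a,b] (trapezoid rule)."""
    w = b - a
    step = w / n
    xs = [a + i * step for i in range(n + 1)]
    def trap(g):
        return step * (sum(g(x) for x in xs[1:-1]) + 0.5 * (g(xs[0]) + g(xs[-1])))
    return trap(lambda x: f(x) * h(x)) / w - trap(f) / w * trap(h) / w

print(round(cheb_T(lambda x: x, lambda x: x, 0.0, 1.0), 6))  # 0.083333, i.e. ~1/12
```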
In [ 5 ] , the authors proved the following theorems:
Let \(f:\left[ a,b\right] \rightarrow \mathbb {R}\) be a Lebesgue integrable function and \(h:\left[ a,b\right] \rightarrow \mathbb {R}\) an absolutely continuous function with \((\cdot -a)(b-\cdot )\left[ h^{\prime }\right] ^{2}\in L\left[ a,b\right] .\) Then we have the inequality
\[ \left\vert T(f,h)\right\vert \leq \frac{1}{\sqrt{2}}\left[ T(f,f)\right] ^{1/2}\frac{1}{\sqrt{b-a}}\left( \int _{a}^{b}(x-a)(b-x)\left[ h^{\prime }(x)\right] ^{2}dx\right) ^{1/2}. \tag{42} \]
The constant \(\frac{1}{\sqrt{2}}\) in (42) is the best possible.
Assume that \(h:\left[ a,b\right] \rightarrow \mathbb {R}\) is monotonic nondecreasing on \(\left[ a,b\right] \) and \(f:\left[ a,b\right] \rightarrow \mathbb {R}\) is absolutely continuous with \(f^{\prime }\in L_{\infty }\left[ a,b\right] .\) Then we have the inequality
\[ \left\vert T(f,h)\right\vert \leq \frac{1}{2(b-a)}\left\Vert f^{\prime }\right\Vert _{\infty }\int _{a}^{b}(x-a)(b-x)\, dh(x). \tag{43} \]
The constant \(\frac{1}{2}\) in (43) is the best possible.
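These bounds are easy to probe numerically. The following sketch (the quadrature and test functions are ours) checks the pre-Grüss bound \(|T(f,h)|\le \frac{1}{\sqrt{2}}[T(f,f)]^{1/2}\frac{1}{\sqrt{b-a}}\big(\int_a^b (x-a)(b-x)[h'(x)]^2\,dx\big)^{1/2}\) for \(f=\sin\) and \(h(x)=x^2\) on \([0,1]\):

```python
import math

def trap(g, a, b, n=4000):
    """Composite trapezoid rule on [a,b]."""
    st = (b - a) / n
    return st * (sum(g(a + i * st) for i in range(1, n)) + 0.5 * (g(a) + g(b)))

def T(f, g, a, b):
    """Chebyshev functional T(f,g) on [a,b], approximated by quadrature."""
    w = b - a
    return trap(lambda x: f(x) * g(x), a, b) / w - trap(f, a, b) / w * trap(g, a, b) / w

a, b = 0.0, 1.0
f = math.sin
h = lambda x: x * x
dh = lambda x: 2 * x                      # h'

lhs = abs(T(f, h, a, b))
rhs = (1 / math.sqrt(2)) * math.sqrt(T(f, f, a, b)) / math.sqrt(b - a) * \
      math.sqrt(trap(lambda x: (x - a) * (b - x) * dh(x) ** 2, a, b))
print(lhs <= rhs)  # True
```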
In the sequel we use the above theorems to obtain generalizations of the results proved in the previous sections.
For \(m\)-tuples \(\mathbf{w}=(w_{1},...,w_{m})\), \(\mathbf{x}=(x_{1},...,x_{m})\) with \(x_{i}\in \left[ a,b\right] \), \(w_{i}\in \mathbb {R}\), \(i=1,...,m\), \(\overline{x}=\frac{1}{W_{m}}\sum _{i=1}^{m}w_{i}x_{i}\) and function \(G_{n}\) as defined in (10), we denote
Similarly for \(x:\left[ \alpha ,\beta \right] \rightarrow \left[ a,b\right] \) continuous function, \(\lambda :\left[ \alpha ,\beta \right] \rightarrow \mathbb {R}\) as defined in Remark 10 or in Remark 13 and for all \(s\in \left[ a,b\right] \) denote
We have the Čebyšev functionals defined as:
Let \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\) be such that \(\phi \in C^{(2n)}\left[ a,b\right] \) for \(n\in \mathbb {N}\) with \((.-a)(b-.)\left[ \phi ^{(2n+1)}\right] ^{2}\in L\left[ a,b\right] \) and \(x_{i}\in \left[ a,b\right] \), \(w_{i}\in \mathbb {R},\) \(i=1,2,...,m\), \(\overline{x}=\tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}x_{i}\) and let the functions \(G_{n}\), \(\Upsilon \) and \(\Omega \) be defined in (10), (44) and (46). Then
where the remainder \(H_{n}^{1}(\phi ;a,b)\) satisfies the estimation
Therefore we have
where the remainder \(H_{n}^{1}(\phi ;a,b)\) satisfies the estimation (49). Now, from identity (17) and the fact that \(\Lambda _{n}(1-t)=\int _{0}^{1}G_{n}(t,s)(1-s)ds\) (see [2]), we obtain (48).
The integral version of the above theorem can be given as follows:
Let \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\) be such that \(\phi \in C^{(2n)}\left[ a,b\right] \) for \(n\in \mathbb {N}\) with \((\cdot -a)(b-\cdot )\left[ \phi ^{(2n+1)}\right] ^{2}\in L\left[ a,b\right] \), let \(x:\left[ \alpha ,\beta \right] \rightarrow \mathbb {R}\) be a continuous function such that \(x(\left[ \alpha ,\beta \right] )\subseteq \left[ a,b\right] \), let \(\lambda :\left[ \alpha ,\beta \right] \rightarrow \mathbb {R}\) be as defined in Remark 10 or in Remark 13, and let \(\overline{x}=\frac{\int _{\alpha }^{\beta }x(t)\, d\lambda (t)}{\int _{\alpha }^{\beta }\, d\lambda (t)}\). Let the functions \(G_{n}\), \(\tilde{\Upsilon }\) and \(\Omega \) be defined in (10), (45) and (47). Then
where the remainder \(\tilde{H}_{n}^{1}(\phi ;a,b)\) satisfies the estimation
Using Theorem 19 we also get the following Grüss type inequality.
Let \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\) be such that \(\phi \in C^{(2n)}\left[ a,b\right] \) for \(n\in \mathbb {N}\) and \(\phi ^{(2n+1)}\geq 0\) on \(\left[ a,b\right] \) and let the function \(\Upsilon \) be defined in (44). Then we have the representation (48) and the remainder \(H_{n}^{1}(\phi ;a,b)\) satisfies the bound
Since
using the identity (17) and (52) we deduce (51).
The integral version of the above theorem can be given as follows:
Let \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\) be such that \(\phi \in C^{(2n)}\left[ a,b\right] \) for \(n\in \mathbb {N}\) and \(\phi ^{(2n+1)}\geq 0\) on \(\left[ a,b\right] \), and let the function \(\tilde\Upsilon \) be defined in (45). Then we have the corresponding integral representation, and the remainder \(\tilde H^{1}_{n}(\phi ;a,b)\) satisfies the bound
We also give the Ostrowsky type inequality related to the generalization of majorization inequality.
Let \(x_{i}\in \left[ a,b\right] \), \(w_{i}\in \mathbb {R},\) \(i=1,2,\ldots ,m\), \(\overline{x}=\frac{1}{W_{m}}\sum _{i=1}^{m}w_{i}x_{i}\) and let \((p,q)\) be a pair of conjugate exponents, that is, \(1\leq p,q\leq \infty \) and \(\frac{1}{p}+\frac{1}{q}=1\). Let \(\phi \in C^{(2n)}\left[ a,b\right] \) be such that \(\left\vert \phi ^{(2n)}\right\vert ^{p}:\left[ a,b\right] \rightarrow \mathbb {R}\) is an R-integrable function for some \(n\in \mathbb {N}\). Then we have
The constant on the right hand side of (53) is sharp for \(1{\lt}p\leq \infty \) and the best possible for \(p=1.\)
Using the identity (17) and applying Hölder’s inequality we obtain
For the proof of the sharpness of the constant \(\left( \int _{a}^{b}\left\vert \Psi (t)\right\vert ^{q}dt\right) ^{1/q}\) let us find a function \(\phi \) for which the equality in (53) is obtained.
For \(1{\lt}p{\lt}\infty \) take \(\phi \) to be such that
For \(p=\infty \) take \(\phi ^{(2n)}(t)=\operatorname{sgn}\Psi (t)\).
For \(p=1\) we prove that
is the best possible inequality. Suppose that \(\left\vert \Psi (t)\right\vert \) attains its maximum at \(t_{0}\in \lbrack a,b].\) First we assume that \(\Psi (t_{0}){\gt}0\). For \(\varepsilon \) small enough we define \(\phi _{\varepsilon }(t)\) by
Then for \(\varepsilon \) small enough
Now from the above inequality we have
Since
the statement follows. In the case \(\Psi (t_{0}){\lt}0\), we define \(\phi _{\varepsilon }(t)\) by
and the rest of the proof is the same as above.
The integral version of the above theorem can be stated as follows:
Let \(x:[\alpha ,\beta ]\rightarrow \mathbb {R}\) be a continuous function such that \(x([\alpha ,\beta ])\subseteq \lbrack a,b]\), let \(\lambda :[\alpha ,\beta ]\rightarrow \mathbb {R}\) be as defined in Remark 10 or in Remark 13, let \(\overline{x}=\frac{\int _{\alpha }^{\beta }x(t)\, d\lambda (t)}{\int _{\alpha }^{\beta }\, d\lambda (t)}\), and let \((p,q)\) be a pair of conjugate exponents, that is, \(1\leq p,q\leq \infty \) and \(\frac{1}{p}+\frac{1}{q}=1\). Let \(\phi \in C^{(2n)}\left[ a,b\right] \) be such that \(\left\vert \phi ^{(2n)}\right\vert ^{p}:\left[ a,b\right] \rightarrow \mathbb {R}\) is an R-integrable function for some \(n\in \mathbb {N}\). Then we have
The constant on the right hand side of (58) is sharp for \(1{\lt}p\leq \infty \) and the best possible for \(p=1.\)
The research of the authors has been fully supported by the Croatian Science Foundation under project 5435.
Bibliography
- 1
M. Adil Khan, N. Latif and J. Pečarić, Generalizations of majorization inequality via Lidstone’s polynomial and their applications, Commun. Math. Anal. 19 (2016) no. 2, 101–122.
- 2
R.P. Agarwal, P.J.Y. Wong, Error Inequalities in Polynomial Interpolation and Their Applications, Kluwer Academic Publishers, Dordrecht-Boston-London, 1993.
- 3
- 4
K.E. Atkinson, An Introduction to Numerical Analysis, 2nd ed., Wiley, New York, 1989.
- 5
- 6
L. Fuchs, A new proof of an inequality of Hardy-Littlewood-Pólya, Mat. Tidsskr. B (1947), 53–54.
- 7
G.H. Hardy, J.E. Littlewood, G. Pólya, Some simple inequalities satisfied by convex functions, Messenger of Mathematics, 58 (1929), 145–152.
- 8
G.H. Hardy, J.E. Littlewood, G. Pólya, Inequalities, London and New York: Cambridge University Press, second ed., 1952.
- 9
S. Karlin, Total Positivity, Stanford Univ. Press, Stanford, 1968.
- 10
S. Karlin, W.J. Studden, Tchebycheff systems: with applications in analysis and statistics, Interscience, New York, 1966.
- 11
A.W. Marshall, I. Olkin and B.C. Arnold, Inequalities: Theory of Majorization and Its Applications (Second Edition), Springer Series in Statistics, New York, 2011.
- 12
J.E. Pečarić, F. Proschan and Y.L. Tong, Convex functions, partial orderings, and statistical applications, Mathematics in science and engineering 187, Academic Press, 1992.
- 13
- 14
J.M. Whittaker, On Lidstone series and two-point expansions of analytic functions, Proc. Lond. Math. Soc., 36 (1933–1934), 451–469.
- 15
D.V. Widder, Completely convex function and Lidstone series, Trans. Amer. Math. Soc., 51 (1942), 387–398.
- 16
D.V. Widder, The Laplace Transform, Princeton Univ. Press, New Jersey, 1941.