Generalization of Jensen’s and Jensen-Steffensen’s inequalities and their converses by Lidstone’s polynomial and majorization theorem

Gorana Aras-Gazić\(^{1}\), Josip Pečarić\(^{2}\), Ana Vukelić\(^{3}\)

February 20, 2017.

\(^{1}\)University of Zagreb, Faculty of Architecture, Kaciceva 26, \(10000\) Zagreb, Croatia, garas@arhitekt.hr

\(^{2}\)University of Zagreb, Faculty of Textile Technology, Prilaz baruna Filipovica 28a, \(10 000\) Zagreb, Croatia, pecaric@element.hr

\(^{3}\)University of Zagreb, Faculty of Food Technology and Biotechnology, Mathematics department, Pierottijeva 6, 10000 Zagreb, Croatia, avukelic@pbf.hr

In this paper, using majorization theorems and Lidstone’s interpolating polynomials, we obtain results concerning Jensen’s and Jensen-Steffensen’s inequalities and their converses in both the integral and the discrete case. We give bounds for identities related to these inequalities by using Čebyšev functionals. We also give Grüss type and Ostrowski type inequalities for these functionals. Furthermore, we use these generalizations to construct linear functionals, and we present mean value theorems and \(n\)-exponential convexity, which leads to exponential convexity and then to log-convexity for these functionals. We give some families of functions which enable us to construct large families of exponentially convex functions, and we also give Stolarsky type means and prove their monotonicity.

MSC. Primary 26D15, Secondary 26D07, 26A51

Keywords. Majorization, Green function, Jensen inequality, Jensen-Steffensen inequality, \((2n)\)-convex function, Lidstone polynomial, Čebyšev functional, Grüss type inequality, Ostrowski type inequality, Cauchy type mean value theorems, \(n\)-exponential convexity, exponential convexity, log-convexity, means.

1 Introduction

Majorization makes precise the vague notion that the components of a vector \(\mathbf{x}\) are “less spread out” or “more nearly equal” than the components of a vector \(\mathbf{y}\). For fixed \(m\geq 2\) let

\[ \mathbf{x}=\left( x_{1},...,x_{m}\right) ,~ \mathbf{y}=\left( y_{1},...,y_{m}\right) \]

denote two \(m\)-tuples. Let

\[ x_{[1]}\geq x_{[2]}\geq ...\geq x_{[m]},~ ~ y_{[1]}\geq y_{[2]}\geq ...\geq y_{[m]}, \]
\[ x_{(1)}\leq x_{(2)}\leq ...\leq x_{(m)},~ ~ y_{(1)}\leq y_{(2)}\leq ...\leq y_{(m)} \]

be their ordered components.

Majorization: (see [12, p. 319]) \(\mathbf{x}\) is said to majorize \(\mathbf{y}\) (or \(\mathbf{y}\) is said to be majorized by \(\mathbf{x}\)), in symbols \(\mathbf{x}\succ \mathbf{y}\), if

\begin{equation} \sum _{i=1}^{l}y_{[i]}\leq \sum _{i=1}^{l}x_{[i]}\label{majorization}\end{equation}

holds for \(l=1,2,...,m-1\) and

\[ \sum _{i=1}^{m}x_{[i]}=\sum _{i=1}^{m}y_{[i]}. \]

Note that (1) is equivalent to

\[ \sum _{i=m-l+1}^{m}y_{(i)}\leq \sum _{i=m-l+1}^{m}x_{(i)} \]

holds for \(l=1,2,...,m-1.\)
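As a small numerical illustration (our sketch, not part of the paper; the data are arbitrary), the partial-sum conditions above translate directly into code:

```python
import numpy as np

def majorizes(x, y, tol=1e-12):
    """Check x ≻ y via partial sums of the decreasing rearrangements,
    together with equality of the total sums."""
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    px, py = np.cumsum(xs), np.cumsum(ys)
    return bool(np.all(py[:-1] <= px[:-1] + tol) and abs(px[-1] - py[-1]) <= tol)

x, y = [5.0, 2.0, 1.0], [3.0, 3.0, 2.0]
print(majorizes(x, y))  # True: x is more "spread out" than y
print(majorizes(y, x))  # False
```

Note that both tuples have the same total sum, 8; only the partial-sum conditions distinguish them.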
There are several equivalent characterizations of the majorization relation \(\mathbf{x}\succ \mathbf{y}\) in addition to the conditions given in the definition of majorization. One is the answer to a question posed and answered in 1929 by Hardy, Littlewood and Pólya in [7] and [8]: \(\mathbf{x}\) majorizes \(\mathbf{y}\) if and only if

\begin{equation} \sum _{i=1}^{m}\phi \left( y_{i}\right) \leq \sum _{i=1}^{m}\phi \left( x_{i}\right) \label{majorization1}\end{equation}

for every continuous convex function \(\phi \). Another interesting characterization of \(\mathbf{x}\succ \mathbf{y}\), also by Hardy, Littlewood and Pólya in [7] and [8], is that \(\mathbf{y}=\mathbf{Px}\) for some doubly stochastic matrix \(\mathbf{P}\). In fact, this characterization implies that the set of vectors \(\mathbf{y}\) satisfying \(\mathbf{x}\succ \mathbf{y}\) is the convex hull spanned by the \(m!\) points formed from the permutations of the elements of \(\mathbf{x}\).

The following theorem is well known as the majorization theorem; a convenient reference for its proof is given by Marshall and Olkin in [11, p. 14] (see also [12, p. 320]):

Theorem 1

Let \(\mathbf{x}=\left( x_{1},..., x_{m}\right) ,~ \mathbf{y}=\left( y_{1},..., y_{m}\right) \) be two \(m\)-tuples such that \(x_{i},y_{i}\in \left[ a,b\right], ~ i=1,...,m\). Then

\begin{equation} \sum _{i=1}^{m}\phi \left( y_{i}\right) \leq \sum _{i=1}^{m}\phi \left( x_{i}\right) \label{majorization2}\end{equation}

holds for every continuous convex function \(\phi :[a,b]\rightarrow \mathbb {R}\) iff \(\mathbf{x}\succ \mathbf{y}\) holds.

The following theorem can be regarded as a generalization of Theorem 1. It is known as the weighted majorization theorem and was proved by Fuchs in [6] (see also [11, p. 580] and [12, p. 323]).

Theorem 2

Let \(\mathbf{x}=\left( x_{1},..., x_{m}\right) ,~ \mathbf{y}=\left( y_{1},..., y_{m}\right) \) be two decreasing real \(m\)-tuples with \(x_{i},y_{i}\in \left[ a,b\right] \), \(i=1,...,m\), and let \(\mathbf{w}=\left( w_{1},..., w_{m}\right) \) be a real \(m\)-tuple such that

\begin{equation} \sum _{i=1}^{l}w_{i}y_{i}\leq \sum _{i=1}^{l}w_{i}x_{i} \quad \text{for } l=1,...,m-1,\label{majorization3}\end{equation}

and

\begin{equation} \sum _{i=1}^{m}w_{i}y_{i}= \sum _{i=1}^{m}w_{i}x_{i}.\label{majorization4}\end{equation}

Then for every continuous convex function \(\phi :[a,b]\rightarrow \mathbb {R}\), we have

\begin{equation} \sum _{i=1}^{m}w_{i}\phi \left( y_{i}\right) \leq \sum _{i=1}^{m}w_{i}\phi \left( x_{i}\right) .\label{majorization5}\end{equation}
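Theorem 2 can be probed numerically. The following sketch (ours; the tuples and weights are arbitrary choices satisfying (4) and (5)) confirms the conclusion for the convex function \(\phi (t)=t^{2}\):

```python
import numpy as np

# sample decreasing m-tuples and real weights chosen to satisfy (4) and (5)
x = np.array([4.0, 3.0, 1.0])
y = np.array([3.5, 3.0, 1.5])
w = np.array([1.0, 2.0, 1.0])

# conditions (4) and (5)
assert np.all(np.cumsum(w * y)[:-1] <= np.cumsum(w * x)[:-1])
assert np.isclose(np.sum(w * y), np.sum(w * x))

# conclusion of the theorem for the convex function φ(t) = t²
phi = lambda t: t**2
print(np.sum(w * phi(y)), np.sum(w * phi(x)))  # 32.5 ≤ 35.0
```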

Bernstein proved that if all the even derivatives of a function \(f\) are nonnegative in \(\left( a,b\right) \), then \(f\) has an analytic continuation into the complex plane. Boas suggested to Widder that this might be proved by use of the Lidstone series. This seemed plausible because the Lidstone series, a generalization of the Taylor series, approximates a given function in the neighborhood of two points instead of one by using the even derivatives. Such series have been studied by G. J. Lidstone (1929), H. Poritsky (1932), J. M. Whittaker (1934) and others (see [3]).

Definition 3

Let \(\phi \in C^{\infty }([0,1])\). Then the Lidstone series has the form

\[ \sum _{k=0}^{\infty }\left( \phi ^{(2k)}(0) \Lambda _{k}(1-x)+ \phi ^{(2k)}(1) \Lambda _{k}(x)\right) , \]

where \(\Lambda _{n}\) is a polynomial of degree \((2n+1)\) defined by the relations

\begin{align} \label{Lidstone} & \Lambda _{0}(t)=t,\nonumber \\ & \Lambda ^{\prime \prime }_{n}(t)=\Lambda _{n-1}(t),\\ & \Lambda _{n}(0)=\Lambda _{n}(1) = 0, \qquad n\geq 1.\nonumber \end{align}

Other explicit representations of the Lidstone polynomials are given in [2] and [14]:

\begin{align*} \Lambda _{n}(t) & =(-1)^{n}\tfrac {2}{\pi ^{2n+1}}\sum _{k=1}^{\infty }\tfrac {(-1)^{k+1}}{k^{2n+1}}\sin k\pi t,\\ \Lambda _{n}(t) & =\tfrac {1}{6}\left[ \tfrac {6t^{2n+1}}{(2n+1)!}-\tfrac {t^{2n-1}}{(2n-1)!}\right] -\sum _{k=0}^{n-2}\tfrac {2(2^{2k+3}-1)}{(2k+4)!}B_{2k+4}\tfrac {t^{2n-2k-3}}{(2n-2k-3)!},\quad n=1,2,\ldots ,\\ \Lambda _{n}(t) & =\tfrac {2^{2n+1}}{(2n+1)!}B_{2n+1}\left( \tfrac {1+t}{2}\right) ,\quad n=1,2\ldots , \end{align*}

where \(B_{2k+4}\) is the \((2k+4)\)-th Bernoulli number and \(B_{2n+1}\left( \frac{1+t}{2}\right) \) is the Bernoulli polynomial of degree \(2n+1\).
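The defining relations (7) determine \(\Lambda _{n}\) uniquely, which makes it easy to generate the polynomials symbolically. The sketch below (ours, using sympy) integrates \(\Lambda _{n-1}\) twice and fixes the linear term from the boundary conditions, then compares the result with the Bernoulli-polynomial representation above:

```python
import sympy as sp

t = sp.symbols('t')

def lidstone(n):
    """Λ_n from the defining relations: Λ_0(t)=t, Λ_n''=Λ_{n-1}, Λ_n(0)=Λ_n(1)=0."""
    L = t
    for _ in range(n):
        P = sp.integrate(sp.integrate(L, t), t)   # a particular solution of Λ'' = L
        # add a linear term a + b·t so that Λ(0) = Λ(1) = 0
        L = sp.expand(P - P.subs(t, 0) + (P.subs(t, 0) - P.subs(t, 1)) * t)
    return L

print(lidstone(1))  # Λ_1(t) = (t³ - t)/6
# agrees with 2^(2n+1)/(2n+1)! · B_{2n+1}((1+t)/2) for n = 1:
assert sp.simplify(lidstone(1) - sp.Rational(2**3, 6) * sp.bernoulli(3, (1 + t) / 2)) == 0
```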

In [ 15 ] , Widder proved the fundamental lemma:

Lemma 4

If \(\phi \in C^{(2n)}([0,1])\), then

\begin{equation} \phi (t)= \sum _{k=0}^{n-1} \left[ \phi ^{(2k)} (0) \Lambda _{k} (1-t) +\phi ^{(2k)} (1) \Lambda _{k} (t)\right] + \int _{0}^{1} G_{n} (t,s)\phi ^{(2n)} (s) ds,\label{fLid}\end{equation}

where

\begin{equation} \label{G1}G_{1}(t,s) = G(t,s) = \left\{ \begin{array}[c]{ll}(t-1)s, & \mbox{if $s \leq t$,}\\ (s-1)t, & \mbox{if $t \leq s$,} \end{array} \right. \end{equation}

is the homogeneous Green’s function of the differential operator \(\frac{d^{2}}{ds^{2}}\) on \([0,1]\), and with the successive iterates of \(G(t,s)\)

\begin{equation} \label{Green}G_{n}(t,s) = \int _{0}^{1}G_{1}(t,p)G_{n-1}(p,s)\, dp , \quad n\geq 2 . \end{equation}

The Lidstone polynomial can be expressed in terms of \(G_{n}(t,s)\) as

\begin{align} \Lambda _{n} (t)=\int _{0}^{1} G_{n}(t,s) s \, ds. \end{align}
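The relation above expressing \(\Lambda _{n}\) through \(G_{n}\) can be verified by discretizing the integrals; a sketch of ours follows (grid size and trapezoid quadrature are arbitrary choices):

```python
import numpy as np

N = 401
s = np.linspace(0.0, 1.0, N)
h = s[1] - s[0]
wq = np.full(N, h); wq[0] = wq[-1] = h / 2          # trapezoid quadrature weights

T, S = np.meshgrid(s, s, indexing='ij')
G1 = np.where(S <= T, (T - 1) * S, (S - 1) * T)     # Green's function G(t,s) of Lemma 4

G2 = G1 @ (wq[:, None] * G1)                        # iterate: G2(t,s) = ∫ G1(t,p) G1(p,s) dp
lam1 = G1 @ (wq * s)                                # Λ1(t) = ∫ G1(t,s) s ds
lam2 = G2 @ (wq * s)                                # Λ2(t) = ∫ G2(t,s) s ds

i = N // 2                                          # grid point t = 1/2
print(lam1[i], (0.5**3 - 0.5) / 6)                  # both ≈ -0.0625
```

The matrix product with trapezoid weights plays the role of the inner integral over \(p\).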

Definition 5

Let \(\phi \) be a real-valued function defined on the interval \([a,b]\). The divided difference of order \(n\) of the function \(\phi \) at distinct points \(x_{0},\ldots ,x_{n} \in [a,b]\) is defined recursively (see [4], [12]) by

\[ \phi [x_{i}]=\phi (x_{i}), \quad (i=0,\ldots ,n) \]

and

\[ \phi [x_{0},\ldots ,x_{n}]=\frac{\phi [x_{1},\ldots ,x_{n}]-\phi [x_{0},\ldots ,x_{n-1}]}{x_{n}-x_{0}}. \]

The value \(\phi [x_{0},\ldots ,x_{n}]\) is independent of the order of the points \(x_{0},\ldots ,x_{n}\).

The definition may be extended to include the case in which some (or all) of the points coincide. Assuming that \(\phi ^{(j-1)}(x)\) exists, we define

\begin{equation} \label{hermit}\phi [\underbrace{x,\ldots ,x}_{j\text{-times}}]=\frac{\phi ^{(j-1)}(x)}{(j-1)!}. \end{equation}

The notion of \(n\)-convexity goes back to Popoviciu [ 13 ] . We follow the definition given by Karlin [ 9 ] :

Definition 6

A function \(\phi : [a,b]\rightarrow \mathbb {R}\) is said to be \(n\)-convex on \([a,b]\), \(n \geq 0\), if for all choices of \((n+1)\) distinct points in \([a,b],\) the \(n\)-th order divided difference of \(\phi \) satisfies

\[ \phi [x_{0},...,x_{n}] \geq 0. \]

In fact, Popoviciu proved that each continuous \(n\)-convex function on \([a,b]\) is the uniform limit of a sequence of \(n\)-convex polynomials. Many related results, as well as some important inequalities due to Favard, Berwald and Steffensen, can be found in [10].
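The recursion of Definition 5 and the \(n\)-convexity test of Definition 6 are straightforward to compute. The following sketch (ours) checks that \(\phi (x)=x^{4}\) is \(4\)-convex: its fourth-order divided difference at any five distinct points equals \(\phi ^{(4)}/4!=1\geq 0\):

```python
def divided_difference(phi, xs):
    """φ[x0, …, xn] at distinct points, by the recursion of Definition 5."""
    if len(xs) == 1:
        return phi(xs[0])
    return (divided_difference(phi, xs[1:]) -
            divided_difference(phi, xs[:-1])) / (xs[-1] - xs[0])

d = divided_difference(lambda x: x**4, [0.0, 0.5, 1.0, 1.5, 2.0])
print(d)  # ≈ 1.0, consistent with 4-convexity of x⁴
```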

In [1] the authors proved the following majorization theorems for \((2n)\)-convex functions:

Theorem 7

Let \(n\in \mathbb {N}\), \(\mathbf{x}=\left( x_{1},...,x_{m}\right) \), \(\mathbf{y}=\left( y_{1},...,y_{m}\right) \) be two decreasing real \(m\)-tuples with \(x_{i},y_{i}\in \lbrack a,b]~ ~ (i=1,\ldots ,m)\) and let \(\mathbf{w}=\left( w_{1},...,w_{m}\right) \) be a real \(m\)-tuple which satisfies (4) and (5).

(i) If \(n\) is odd, then for every \((2n)\)-convex function \(\phi :[a,b]\rightarrow \mathbb {R}\) we have

\begin{align} & \sum _{i=1}^{m}w_{i}\phi (x_{i})-\sum _{i=1}^{m}w_{i}\phi (y_{i})\geq \label{diskretni}\\ & \geq \sum _{k=1}^{n-1}(b-a)^{2k}\phi ^{(2k)}(a)\left[ \sum _{i=1}^{m}w_{i}\Lambda _{k}\left( \tfrac {b-x_{i}}{b-a}\right) -\sum _{i=1}^{m}w_{i}\Lambda _{k}\left( \tfrac {b-y_{i}}{b-a}\right) \right] \nonumber \\ & \quad +\sum _{k=1}^{n-1}(b-a)^{2k}\phi ^{(2k)}(b)\left[ \sum _{i=1}^{m}w_{i}\Lambda _{k}\left( \tfrac {x_{i}-a}{b-a}\right) -\sum _{i=1}^{m}w_{i}\Lambda _{k}\left( \tfrac {y_{i}-a}{b-a}\right) \right] .\nonumber \end{align}


(ii) If \(n\) is even, then for every \((2n)\)-convex function \(\phi :[a,b]\rightarrow \mathbb {R}\) we have

\begin{align} & \sum _{i=1}^{m}w_{i}\phi (x_{i})-\sum _{i=1}^{m}w_{i}\phi (y_{i})\leq \label{diskretni2}\\ & \leq \sum _{k=1}^{n-1}(b-a)^{2k}\phi ^{(2k)}(a)\left[ \sum _{i=1}^{m}w_{i}\Lambda _{k}\left( \tfrac {b-x_{i}}{b-a}\right) -\sum _{i=1}^{m}w_{i}\Lambda _{k}\left( \tfrac {b-y_{i}}{b-a}\right) \right] \nonumber \\ & \quad +\sum _{k=1}^{n-1}(b-a)^{2k}\phi ^{(2k)}(b)\left[ \sum _{i=1}^{m}w_{i}\Lambda _{k}\left( \tfrac {x_{i}-a}{b-a}\right) -\sum _{i=1}^{m}w_{i}\Lambda _{k}\left( \tfrac {y_{i}-a}{b-a}\right) \right] .\nonumber \end{align}

In [3], using Lidstone’s interpolating polynomials and conditions on Green’s functions, the authors present results for Jensen’s inequality and converses of Jensen’s inequality for signed measures. In this paper we give generalized results on Jensen’s and Jensen-Steffensen’s inequalities and their converses by using the majorization theorem and Lidstone’s polynomial for \((2n)\)-convex functions. Then we give bounds for identities related to these inequalities by using Čebyšev functionals. We give Grüss type inequalities and Ostrowski type inequalities for these functionals. We also use these generalizations to construct linear functionals, and we present mean value theorems and \(n\)-exponential convexity, which leads to exponential convexity and then to log-convexity. Finally, we present several families of functions which enable us to construct large families of exponentially convex functions. We give classes of Cauchy type means and prove their monotonicity.

2 Generalization of Jensen’s inequality

We will use the following notation for compositions of functions:

\begin{equation} \Lambda _{k}\left( \tfrac {x-a}{b-a}\right) =\tilde{\Lambda }_{k}(x), \quad x\in [a,b], \, k=0,1,\ldots ,n-1, \end{equation}

\begin{equation} \Lambda _{k}\left( \tfrac {b-x}{b-a}\right) =\hat{\Lambda }_{k}(x), \quad x\in [a,b],\, k=0,1,\ldots ,n-1. \end{equation}

Theorem 8

Let \(n\in \mathbb {N}\), and let \(\mathbf{x}=\left( x_{1},...,x_{m}\right) \) and \(\mathbf{w}=\left( w_{1},...,w_{m}\right) \) be \(m\)-tuples such that \(x_{i}\in \left[ a,b\right] \), \(w_{i}\in \mathbb {R}\), \(i=1,...,m\), \(W_{m}=\sum _{i=1}^{m}w_{i}\neq 0\), \(\overline{x}=\frac{1}{W_{m}}\sum _{i=1}^{m}w_{i}x_{i}\) and \(\phi \in C^{(2n)}\left[ a,b\right] .\) Then

\begin{align} & \tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}\phi (x_{i})-\phi (\overline{x})=\label{Jvazno}\\ & =\sum _{k=0}^{n-1}\phi ^{(2k)}(a)(b-a)^{2k}\left[ \tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}\hat{\Lambda }_{k}(x_{i})-\hat{\Lambda }_{k}(\overline{x})\right] \nonumber \\ & \quad +\sum _{k=0}^{n-1}\phi ^{(2k)}(b)(b-a)^{2k}\left[ \tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}\tilde{\Lambda }_{k}(x_{i})-\tilde{\Lambda }_{k}(\overline{x})\right] \nonumber \\ & \quad +(b-a)^{2n-1}\int _{a}^{b}\left[ \tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}G_{n}\left( \tfrac {x_{i}-a}{b-a},\tfrac {t-a}{b-a}\right) -G_{n}\left( \tfrac {\overline{x}-a}{b-a},\tfrac {t-a}{b-a}\right) \right] \phi ^{(2n)}(t)\, dt.\nonumber \end{align}

Proof.
Consider

\begin{equation} \tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}\phi (x_{i})-\phi (\overline{x}).\label{difference}\end{equation}

By Widder’s lemma we can represent every function \(\phi \in C^{(2n)}([a,b])\) in the form:

\begin{align} \phi (x) & =\sum _{k=0}^{n-1}(b-a)^{2k}\left[ \phi ^{(2k)}(a)\hat{\Lambda }_{k}(x)+\phi ^{(2k)}(b)\tilde{\Lambda }_{k}(x)\right] \label{Widder}\\ & \quad +(b-a)^{2n-1}\int _{a}^{b}G_{n}\left( \tfrac {x-a}{b-a},\tfrac {t-a}{b-a}\right) \phi ^{(2n)}(t)dt,\label{fLid2n}\end{align}

where \(\Lambda _{k}\) is a Lidstone polynomial. Using (19) we calculate \(\phi (x_{i})\) and \(\phi (\overline{x})\), and from (18) we obtain (17).

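For \(n=1\) the identity (17) is easy to check numerically: \(\hat{\Lambda }_{0}\) and \(\tilde{\Lambda }_{0}\) are affine, so the \(k=0\) boundary sums cancel and only the integral term with \(G_{1}\) and \(\phi ^{\prime \prime }\) remains. The sketch below (ours; the data and quadrature are arbitrary) confirms this for \(\phi =\exp \):

```python
import numpy as np

a, b = 0.0, 2.0
x = np.array([1.8, 1.0, 0.4]); w = np.array([1.0, 2.0, 1.0])
W = w.sum(); xbar = (w @ x) / W

def G1(t, s):
    """Green's function G(t,s) of Lemma 4, arguments in [0, 1]."""
    return np.where(s <= t, (t - 1) * s, (s - 1) * t)

ts = np.linspace(a, b, 2001)
hq = ts[1] - ts[0]
wq = np.full(ts.size, hq); wq[0] = wq[-1] = hq / 2   # trapezoid weights
u = (ts - a) / (b - a)
kernel = sum(wi * G1((xi - a) / (b - a), u) for wi, xi in zip(w, x)) / W \
         - G1((xbar - a) / (b - a), u)

# for n = 1 the boundary sums in (17) cancel (Λ0 is linear and (1/W)Σ wᵢxᵢ = x̄),
# leaving: LHS = (b−a) ∫ kernel(t) φ''(t) dt, here with φ = φ'' = exp
lhs = (w @ np.exp(x)) / W - np.exp(xbar)
rhs = (b - a) * np.sum(wq * kernel * np.exp(ts))
print(lhs, rhs)  # the two sides agree
```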

Using Theorem 7, we give a generalization of Jensen’s inequality for \((2n)\)-convex functions:

Theorem 9

Let \(n\in \mathbb {N}\), let \(\mathbf{x}=\left( x_{1},...,x_{m}\right) \) be a decreasing real \(m\)-tuple with \(x_{i}\in \left[ a,b\right] \), \(i=1,...,m\), let \(\mathbf{w}=\left( w_{1},...,w_{m}\right) \) be a positive \(m\)-tuple, \(W_{m}=\sum _{i=1}^{m}w_{i}\) and \(\overline{x}=\frac{1}{W_{m}}\sum _{i=1}^{m}w_{i}x_{i}\).

(i)   If \(n\) is odd, then for every \((2n)\)-convex function \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\) we have

\begin{align} & \tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}\phi (x_{i})-\phi (\overline{x})\geq \nonumber \label{Jvazno1}\\ & \geq \sum _{k=1}^{n-1}\phi ^{(2k)}(a)(b-a)^{2k}\left[ \tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}\hat{\Lambda }_{k}(x_{i})-\hat{\Lambda }_{k}(\overline{x})\right] \nonumber \\ & \quad +\sum _{k=1}^{n-1}\phi ^{(2k)}(b)(b-a)^{2k}\left[ \tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}\tilde{\Lambda }_{k}(x_{i})-\tilde{\Lambda }_{k}(\overline{x})\right] . \end{align}

Moreover, we define the function \(F:[a,b]\rightarrow \mathbb {R}\) by

\begin{equation} F(x)=\sum _{k=1}^{n-1}(b-a)^{2k}\left[ \phi ^{(2k)}(a)\hat{\Lambda }_{k}(x)+\phi ^{(2k)}(b)\tilde{\Lambda }_{k}(x)\right] .\label{F}\end{equation}

If \(F\) is a convex function, then the right-hand side of (21) is non-negative and

\begin{equation} \tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}\phi (x_{i})-\phi (\overline{x})\geq 0.\label{jenmaj}\end{equation}


(ii)   If \(n\) is even, then for every \((2n)\)-convex function \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\), the reverse inequality in (21) holds.
Moreover, if \(F\) is a concave function, then the reverse inequality in (23) is valid.

Proof.
Let \(k\) be the largest index such that \(x_{k}\geq \overline{x}\). For \(l=1,...,k\) we get
\[ \sum _{i=1}^{l}w_{i}\overline{x}\leq \sum _{i=1}^{l}w_{i}x_{i}. \]

For \(l=k+1,...,m-1\), since \(x_{k+1}{\lt}\overline{x}\), we have

\[ \sum _{i=1}^{l}w_{i}x_{i}=\sum _{i=1}^{m}w_{i}x_{i}-\sum _{i=l+1}^{m}w_{i}x_{i}{\gt}\sum _{i=1}^{m}w_{i}\overline{x}-\sum _{i=l+1}^{m}w_{i}\overline{x}=\sum _{i=1}^{l}w_{i}\overline{x}. \]

So,

\begin{equation} \label{major1}\sum _{i=1}^{l}w_{i}\overline{x}\leq \sum _{i=1}^{l}w_{i}x_{i}\text{ for all }l=1,\ldots ,m-1 \end{equation}

and obviously

\begin{equation} \label{major2}\sum _{i=1}^{m}w_{i}\overline{x}=\sum _{i=1}^{m}w_{i}x_{i}. \end{equation}

Now, we put \(\mathbf{x}=(x_{1},\ldots ,x_{m})\) and \(\mathbf{y}=(\bar{x},\ldots ,\bar{x})\) in Theorem 7 to get inequality (21).

For inequality (23) we use the fact that for a convex function \(F\) we have

\[ \tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}F(x_{i})-F(\bar{x})\geq 0. \]

Remark 10

Let \(x:[\alpha ,\beta ]\rightarrow \mathbb {R}\) be a continuous decreasing function such that \(x([\alpha ,\beta ])\subseteq \lbrack a,b]\), let \(\lambda :[\alpha ,\beta ]\rightarrow \mathbb {R}\) be an increasing, bounded function with \(\lambda (\alpha )\neq \lambda (\beta )\), and let \(\overline{x}=\frac{\int _{\alpha }^{\beta }x(t)\, d\lambda (t)}{\int _{\alpha }^{\beta }\, d\lambda (t)}\). For \(x(\gamma )\geq \overline{x}\) we have:

\begin{equation} \int _{\alpha }^{\gamma }x(t)\, d\lambda (t)\geq \int _{\alpha }^{\gamma }x(\gamma )\, d\lambda (t)\geq \int _{\alpha }^{\gamma }\overline{x}\, d\lambda (t), \qquad \gamma \in \left[ \alpha ,\beta \right] .\label{majorI}\end{equation}

If \(x(\gamma ){\lt}\overline{x}\) we have

\begin{align} \int _{\alpha }^{\gamma }x(t)\, d\lambda (t) & =\int _{\alpha }^{\beta }x(t)\, d\lambda (t)-\int _{\gamma }^{\beta }x(t)\, d\lambda (t)\label{majorI1}\\ & {\gt}\int _{\alpha }^{\beta }\overline{x}\, d\lambda (t)-\int _{\gamma }^{\beta }\overline{x}\, d\lambda (t)=\int _{\alpha }^{\gamma }\overline{x}\, d\lambda (t), \qquad \gamma \in \left[ \alpha ,\beta \right] .\nonumber \end{align}

Equality

\begin{equation} \int _{\alpha }^{\beta }x(t)\, d\lambda (t)=\int _{\alpha }^{\beta }\overline{x}d\lambda (t) \end{equation}

obviously holds.

So, if \(n\in \mathbb {N}\) is odd, then for every \((2n)\)-convex function \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\) we obtain the integral version of the inequality (21) from the above theorem:

\begin{align} & \frac{\int _{\alpha }^{\beta }\phi \left( x(t)\right) \, d\lambda (t)}{\int _{\alpha }^{\beta }\, d\lambda (t)}-\phi (\overline{x})\geq \nonumber \label{JvaznoI}\\ & \geq \sum _{k=1}^{n-1}\phi ^{(2k)}(a)(b-a)^{2k}\left[ \frac{\int _{\alpha }^{\beta }\hat{\Lambda }_{k}(x(t))d\lambda (t)}{\int _{\alpha }^{\beta }\, d\lambda (t)}-\hat{\Lambda }_{k}(\overline{x})\right] \nonumber \\ & \quad +\sum _{k=1}^{n-1}\phi ^{(2k)}(b)(b-a)^{2k}\left[ \frac{\int _{\alpha }^{\beta }\tilde{\Lambda }_{k}(x(t))d\lambda (t)}{\int _{\alpha }^{\beta }\, d\lambda (t)}-\tilde{\Lambda }_{k}(\overline{x})\right] , \end{align}

which is a result proved in [3].
Moreover, for the convex function \(F\) defined in (22), the right-hand side of (29) is non-negative and

\begin{equation} \frac{\int _{\alpha }^{\beta }\phi \left( x(t)\right) \, d\lambda (t)}{\int _{\alpha }^{\beta }\, d\lambda (t)}-\phi (\overline{x})\geq 0.\label{jenmaj1}\end{equation}

If \(n\) is even, then for every \((2n)\)-convex function \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\) the reverse inequality in (29) holds. Moreover, if \(F\) is a concave function, then the reverse inequality in (30) is also valid.

Remark 11

Motivated by the inequalities (21) and (29), we define functionals \(\Theta _{1}(\phi )\) and \(\Theta _{2}(\phi )\) by

\begin{align} \Theta _{1}(\phi ) & =\tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}\phi (x_{i})-\phi (\overline{x})\label{funkcional1}\\ & \quad -\sum _{k=1}^{n-1}\phi ^{(2k)}(a)(b-a)^{2k}\left[ \tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}\hat{\Lambda }_{k}(x_{i})-\hat{\Lambda }_{k}(\overline{x})\right] \nonumber \\ & \quad -\sum _{k=1}^{n-1}\phi ^{(2k)}(b)(b-a)^{2k}\left[ \tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}\tilde{\Lambda }_{k}(x_{i})-\tilde{\Lambda }_{k}(\overline{x})\right] \nonumber \end{align}

and

\begin{align} \Theta _{2}(\phi ) & =\frac{\int _{\alpha }^{\beta }\phi \left( x(t)\right) \, d\lambda (t)}{\int _{\alpha }^{\beta }\, d\lambda (t)}-\phi (\overline{x})\label{funkcional2}\\ & \quad -\sum _{k=1}^{n-1}\phi ^{(2k)}(a)(b-a)^{2k}\left[ \frac{\int _{\alpha }^{\beta }\hat{\Lambda }_{k}(x(t))d\lambda (t)}{\int _{\alpha }^{\beta }\, d\lambda (t)}-\hat{\Lambda }_{k}(\overline{x})\right] \nonumber \\ & \quad -\sum _{k=1}^{n-1}\phi ^{(2k)}(b)(b-a)^{2k}\left[ \frac{\int _{\alpha }^{\beta }\tilde{\Lambda }_{k}(x(t))d\lambda (t)}{\int _{\alpha }^{\beta }\, d\lambda (t)}-\tilde{\Lambda }_{k}(\overline{x})\right] .\nonumber \end{align}

Similarly as in [3], we can construct new families of exponentially convex functions and Cauchy type means by looking at these linear functionals. The monotonicity property of the generalized Cauchy means obtained via these functionals can be proved by using the properties of the linear functionals associated with this error representation, such as \(n\)-exponential and logarithmic convexity.

3 Generalization of Jensen-Steffensen’s inequality

Using the majorization theorem for \((2n)\)-convex functions, we give a generalization of Jensen-Steffensen’s inequality:

Theorem 12

Let \(n\in \mathbb {N}\), let \(\mathbf{x}=\left( x_{1},..., x_{m}\right) \) be a decreasing real \(m\)-tuple with \(x_{i}\in \left[ a,b\right] \), \(i=1,...,m\), and let \(\mathbf{w}=\left( w_{1},..., w_{m}\right) \) be a real \(m\)-tuple such that \(0\leq W_{k} \leq W_{m}\), \(k=1,\ldots , m\), \(W_{m}{\gt}0\), where \(W_{k}=\sum _{i=1}^{k}w_{i}\) and \(\overline{x}=\frac{1}{W_{m}}\sum _{i=1}^{m}w_{i}x_{i}\).

(i)   If \(n\) is odd, then for every \((2n)\)-convex function \(\phi : \left[ a,b\right] \rightarrow \mathbb {R}\), the inequality (21) holds.
Moreover, for the convex function \(F\) defined in (22) the inequality (23) is also valid.

(ii)   If \(n\) is even, then for every \((2n)\)-convex function \(\phi : \left[ a,b\right] \rightarrow \mathbb {R}\), the reverse inequality in (21) holds.
Moreover, for the concave function \(F\) defined in (22) the reverse inequality in (23) is also valid.

Proof.
Let \(k\) be the largest index such that \(x_{k}\geq \overline{x}\). For \(l=1,...,k\) we have

\begin{equation} \label{majorJS1}\sum _{i=1}^{l}w_{i}x_{i}-W_{l} x_{l}=\sum _{i=1}^{l-1}(x_{i}-x_{i+1})W_{i}\geq 0 \end{equation}

and so we get

\[ \sum _{i=1}^{l}w_{i}\overline{x}=W_{l}\overline{x}\leq W_{l}x_{l}\leq \sum _{i=1}^{l}w_{i}x_{i}. \]

For \(l=k+1,...,m-1\), since \(x_{k+1}{\lt}\overline{x}\), we have

\begin{equation} \label{majorJS2}x_{l}\left( W_{m}-W_{l}\right) -\sum _{i=l+1}^{m}w_{i}x_{i}=\sum _{i=l+1}^{m}(x_{i-1}-x_{i})(W_{m}-W_{i-1})\geq 0 \end{equation}

and now

\begin{equation} \label{majorJS3}\sum _{i=l+1}^{m}w_{i}\overline{x}=\left( W_{m}-W_{l}\right) \overline{x}>\left( W_{m}-W_{l}\right) x_{l}\geq \sum _{i=l+1}^{m}w_{i}x_{i}. \end{equation}

So, similarly as in the proof of Theorem 9, we get that the majorization conditions (4) and (5) are satisfied, so the inequalities (21) and (23) are valid.

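The Jensen–Steffensen conditions \(0\leq W_{k}\leq W_{m}\), \(W_{m}>0\) allow some negative weights. A quick numerical sketch (ours, with arbitrary data) checks that the majorization conditions (24) and the conclusion (23) still hold for a convex \(\phi \):

```python
import numpy as np

x = np.array([5.0, 4.0, 2.0, 1.0])          # decreasing
w = np.array([2.0, -1.0, 2.0, 1.0])         # one negative weight, yet 0 ≤ W_k ≤ W_m
Wk = np.cumsum(w)
assert np.all(Wk >= 0) and np.all(Wk <= Wk[-1]) and Wk[-1] > 0

xbar = (w @ x) / Wk[-1]
# majorization condition (24): Σ_{i≤l} wᵢ x̄ ≤ Σ_{i≤l} wᵢ xᵢ for l = 1, …, m−1
assert np.all(Wk[:-1] * xbar <= np.cumsum(w * x)[:-1] + 1e-12)

# Jensen–Steffensen inequality (23) for the convex function φ(t) = t²
phi = lambda t: t**2
print((w @ phi(x)) / Wk[-1] - phi(xbar))    # 3.1875 ≥ 0
```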

Remark 13

Let \(x:[\alpha ,\beta ] \rightarrow \mathbb {R}\) be a continuous, decreasing function such that \(x([\alpha ,\beta ])\subseteq [a,b]\), let \(\lambda :[\alpha ,\beta ] \rightarrow \mathbb {R}\) be either continuous or of bounded variation satisfying \(\lambda (\alpha )\leq \lambda (t)\leq \lambda (\beta )\) for all \(t\in [\alpha ,\beta ]\), and let \(\overline{x}=\frac{\int _{\alpha }^{\beta } x(t) \, d\lambda (t)}{\int _{\alpha }^{\beta }\, d\lambda (t)}\). For \(x(\gamma )\geq \overline{x}\) we have:

\[ \int _{\alpha }^{\gamma }x(t)d\lambda (t)-x(\gamma )\int _{\alpha }^{\gamma }d\lambda (t)=-\int _{\alpha }^{\gamma }x^{\prime }(t)\left( \int _{\alpha }^{t}d\lambda (u)\right) dt\geq 0 \]

and so

\[ \overline{x}\int _{\alpha }^{\gamma }d\lambda (t)\leq x(\gamma )\int _{\alpha }^{\gamma }d\lambda (t)\leq \int _{\alpha }^{\gamma }x(t)d\lambda (t). \]

If \(x(\gamma ){\lt}\overline{x}\) we have

\[ x(\gamma )\int _{\gamma }^{\beta }d\lambda (t)-\int _{\gamma }^{\beta }x(t)d\lambda (t)=-\int _{\gamma }^{\beta }x^{\prime }(t)\left( \int _{t}^{\beta }d\lambda (u)\right) dt\geq 0 \]

and now

\[ \overline{x}\int _{\gamma }^{\beta }d\lambda (t){\gt} x(\gamma )\int _{\gamma }^{\beta }d\lambda (t)\geq \int _{\gamma }^{\beta }x(t)d\lambda (t). \]

Similarly as in Remark 10, we get that the conditions for majorization are satisfied, so the inequalities (29) and (30) are valid.

4 Generalization of converse of Jensen’s inequality

Theorem 14

Let \(n\in \mathbb {N}\), let \(\mathbf{x}=\left( x_{1},...,x_{r}\right) \) be a real \(r\)-tuple with \(x_{i}\in \lbrack m,M]\subseteq \left[ a,b\right] \), \(i=1,...,r\), let \(\mathbf{w}=\left( w_{1},...,w_{r}\right) \) be a positive \(r\)-tuple, \(W_{r}=\sum _{i=1}^{r}w_{i}\) and \(\overline{x}=\frac{1}{W_{r}}\sum _{i=1}^{r}w_{i}x_{i}\).

(i) If \(n\) is odd, then for every \((2n)\)-convex function \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\) we have

\begin{align} & \tfrac {1}{W_{r}}\sum _{i=1}^{r}w_{i}\phi (x_{i})\leq \label{reverseJ} \\ & \leq \tfrac {\overline{x}-m}{M-m}\phi \left( M\right) +\tfrac {M-\overline{x}}{M-m}\phi \left( m\right) \nonumber \\ & \quad -\sum _{k=1}^{n-1}\phi ^{(2k)}(a)(b-a)^{2k}\left[ \tfrac {\overline{x}-m}{M-m}\hat{\Lambda }_{k}\left( M\right) +\tfrac {M-\overline{x}}{M-m}\hat{\Lambda }_{k}\left( m\right) -\tfrac {1}{W_{r}}\sum _{i=1}^{r}w_{i}\hat{\Lambda }_{k}(x_{i})\right] \nonumber \\ & \quad -\sum _{k=1}^{n-1}\phi ^{(2k)}(b)(b-a)^{2k}\left[ \tfrac {\overline{x}-m}{M-m}\tilde{\Lambda }_{k}\left( M\right) +\tfrac {M-\overline{x}}{M-m}\tilde{\Lambda }_{k}\left( m\right) -\tfrac {1}{W_{r}}\sum _{i=1}^{r}w_{i}\tilde{\Lambda }_{k}(x_{i})\right] .\nonumber \end{align}

Moreover, for the convex function \(F\) defined in (22), we have

\begin{equation} \tfrac {1}{W_{r}}\sum _{i=1}^{r}w_{i}\phi (x_{i})\leq \tfrac {\overline{x}-m}{M-m}\phi \left( M\right) +\tfrac {M-\overline{x}}{M-m}\phi \left( m\right) .\label{jenmajcon}\end{equation}


(ii) If \(n\) is even, then for every \((2n)\)-convex function \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\), the reverse inequality in (36) holds.
Moreover, for the concave function \(F\) defined in (22) the reverse inequality in (37) is also valid.

Proof.
Using inequality (21) we have
\begin{align*} & \tfrac {1}{W_{r}}\sum _{i=1}^{r}w_{i}\phi (x_{i}) = \\ & =\tfrac {1}{W_{r}}\sum _{i=1}^{r}w_{i}\phi \left( \tfrac {x_{i}-m}{M-m}M+\tfrac {M-x_{i}}{M-m}m\right) \\ & \leq \tfrac {\overline{x}-m}{M-m}\phi \left( M\right) +\tfrac {M-\overline{x}}{M-m}\phi \left( m\right) \\ & \quad -\sum _{k=1}^{n-1}\phi ^{(2k)}(a)(b-a)^{2k}\left[ \tfrac {\overline{x}-m}{M-m}\hat{\Lambda }_{k}\left( M\right) +\tfrac {M-\overline{x}}{M-m}\hat{\Lambda }_{k}\left( m\right) -\tfrac {1}{W_{r}}\sum _{i=1}^{r}w_{i}\hat{\Lambda }_{k}(x_{i})\right] \\ & \quad -\sum _{k=1}^{n-1}\phi ^{(2k)}(b)(b-a)^{2k}\left[ \tfrac {\overline{x}-m}{M-m}\tilde{\Lambda }_{k}\left( M\right) +\tfrac {M-\overline{x}}{M-m}\tilde{\Lambda }_{k}\left( m\right) -\tfrac {1}{W_{r}}\sum _{i=1}^{r}w_{i}\tilde{\Lambda }_{k}(x_{i})\right] . \end{align*}

Hence, for any odd \(n\) and any \((2n)\)-convex function \(\phi \) we get (36).

For inequality (37) we use the fact that for a convex function \(F\) we have

\[ \tfrac {1}{W_{r}}\sum _{i=1}^{r}w_{i}F(x_{i})\leq \tfrac {\overline{x}-m}{M-m}F\left( M\right) +\tfrac {M-\overline{x}}{M-m}F\left( m\right) . \]

(ii) The proof is similar to that of part (i).


Corollary 15

Let \(n\in \mathbb {N}\), let \(\mathbf{x}=\left( x_{1},...,x_{r}\right) \) be a real \(r\)-tuple with \(x_{i}\in \lbrack m,M]\), let \(\mathbf{w}=\left( w_{1},...,w_{r}\right) \) be a positive \(r\)-tuple, \(W_{r}=\sum _{i=1}^{r}w_{i}\) and \(\overline{x}=\frac{1}{W_{r}}\sum _{i=1}^{r}w_{i}x_{i}\).
If \(n\) is odd, then for every \((2n)\)-convex function \(\phi :\left[ m,M\right] \rightarrow \mathbb {R}\) we have

\begin{equation} \label{f38} \sum _{i=1}^{r}w_{i}\phi (x_{i})\leq \sum _{k=0}^{n-1}(M-m)^{2k}\sum _{i=1}^{r}w_{i}\left[ \phi ^{(2k)}(m)\hat{\Lambda }_{k}(x_{i})+\phi ^{(2k)}(M)\tilde{\Lambda }_{k}(x_{i})\right] . \end{equation}

If \(n\) is even, the reverse inequality in (38) is valid.

Proof.
We use inequality (36) with \(m=a\) and \(M=b\), together with (7).
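For \(n=1\), inequality (38) becomes the classical converse (Lah–Ribarič type) bound, since \(\hat{\Lambda }_{0}(x)=\frac{M-x}{M-m}\) and \(\tilde{\Lambda }_{0}(x)=\frac{x-m}{M-m}\) after rescaling to \([m,M]\). A numerical sketch of ours, with arbitrary data:

```python
import numpy as np

m, M = 0.0, 1.0
x = np.array([0.2, 0.5, 0.9]); w = np.array([1.0, 1.0, 2.0])
phi = np.exp                                   # exp is (2n)-convex for n = 1

# n = 1 case of (38): Σ wᵢφ(xᵢ) ≤ Σ wᵢ [φ(m)(M−xᵢ) + φ(M)(xᵢ−m)]/(M−m)
lhs = w @ phi(x)
rhs = w @ (phi(m) * (M - x) + phi(M) * (x - m)) / (M - m)
print(lhs, rhs)  # lhs ≤ rhs
```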

Remark 16

Let \(x:[\alpha ,\beta ] \rightarrow \mathbb {R}\) be a continuous function such that \(x([\alpha ,\beta ])\subseteq [m,M]\subseteq [a,b]\), let \(\lambda :[\alpha ,\beta ] \rightarrow \mathbb {R}\) be an increasing, bounded function with \(\lambda (\alpha )\neq \lambda (\beta )\), and let \(\overline{x}=\frac{\int _{\alpha }^{\beta } x(t) \, d\lambda (t)}{\int _{\alpha }^{\beta }\, d\lambda (t)}\). Similarly as in Theorem 14, we get an integral version of the converse of Jensen’s inequality.

For odd \(n\in \mathbb {N}\) and every \((2n)\)-convex function \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\) we have:

\begin{align} & \frac{\int _{\alpha }^{\beta }\phi (x(t))d\lambda (t)}{\int _{\alpha }^{\beta }d\lambda (t)}\leq \label{reverseJI}\\ & \leq \tfrac {\overline{x}-m}{M-m}\phi \left( M\right) +\tfrac {M-\overline{x}}{M-m}\phi \left( m\right) \nonumber \\ & \quad -\sum _{k=1}^{n-1}\phi ^{(2k)}(a)(b-a)^{2k}\left[ \tfrac {\overline{x}-m}{M-m}\hat{\Lambda }_{k}\left( M\right) +\tfrac {M-\overline{x}}{M-m}\hat{\Lambda }_{k}\left( m\right) -\frac{\int _{\alpha }^{\beta }\hat{\Lambda }_{k}(x(t))d\lambda (t)}{\int _{\alpha }^{\beta }d\lambda (t)}\right] \nonumber \\ & \quad -\sum _{k=1}^{n-1}\phi ^{(2k)}(b)(b-a)^{2k}\left[ \tfrac {\overline{x}-m}{M-m}\tilde{\Lambda }_{k}\left( M\right) +\tfrac {M-\overline{x}}{M-m}\tilde{\Lambda }_{k}\left( m\right) -\frac{\int _{\alpha }^{\beta }\tilde{\Lambda }_{k}(x(t))d\lambda (t)}{\int _{\alpha }^{\beta }d\lambda (t)}\right] ,\nonumber \end{align}

which is a result proved in [3].
Moreover, for the convex function \(F\) defined in (22) we have

\begin{equation} \frac{\int _{\alpha }^{\beta }\phi (x(t))d\lambda (t)}{\int _{\alpha }^{\beta }d\lambda (t)}\leq \tfrac {\overline{x}-m}{M-m}\phi \left( M\right) +\tfrac {M-\overline{x}}{M-m}\phi \left( m\right) .\label{jenmajconI}\end{equation}

If \(n\) is even, then for every \((2n)\)-convex function \(\phi : \left[ a,b\right] \rightarrow \mathbb {R}\), the reverse inequality in (39) holds.
Moreover, for the concave function \(F\) defined in (22) the reverse inequality in (40) is also valid.

Remark 17

Motivated by the inequalities (36) and (39), we define functionals \(\Theta _{3}(\phi )\) and \(\Theta _{4}(\phi )\) by

\begin{align*} & \Theta _{3}(\phi ) = \\ & =\tfrac {1}{W_{r}}\sum _{i=1}^{r}w_{i}\phi (x_{i})-\tfrac {\overline{x}-m}{M-m}\phi \left( M\right) -\tfrac {M-\overline{x}}{M-m}\phi \left( m\right) \\ & \quad +\sum _{k=1}^{n-1}\phi ^{(2k)}(a)(b-a)^{2k}\left[ \tfrac {\overline{x}-m}{M-m}\hat{\Lambda }_{k}\left( M\right) +\tfrac {M-\overline{x}}{M-m}\hat{\Lambda }_{k}\left( m\right) -\tfrac {1}{W_{r}}\sum _{i=1}^{r}w_{i}\hat{\Lambda }_{k}(x_{i})\right] \\ & \quad +\sum _{k=1}^{n-1}\phi ^{(2k)}(b)(b-a)^{2k}\left[ \tfrac {\overline{x}-m}{M-m}\tilde{\Lambda }_{k}\left( M\right) +\tfrac {M-\overline{x}}{M-m}\tilde{\Lambda }_{k}\left( m\right) -\tfrac {1}{W_{r}}\sum _{i=1}^{r}w_{i}\tilde{\Lambda }_{k}(x_{i})\right] , \end{align*}

and

\begin{align*} & \Theta _{4}(\phi ) = \\ & =\frac{\int _{\alpha }^{\beta }\phi (x(t))d\lambda (t)}{\int _{\alpha }^{\beta }d\lambda (t)}-\tfrac {\overline{x}-m}{M-m}\phi \left( M\right) -\tfrac {M-\overline{x}}{M-m}\phi \left( m\right) \\ & \quad +\sum _{k=1}^{n-1}\phi ^{(2k)}(a)(b-a)^{2k}\left[ \tfrac {\overline{x}-m}{M-m}\hat{\Lambda }_{k}\left( M\right) +\tfrac {M-\overline{x}}{M-m}\hat{\Lambda }_{k}\left( m\right) -\frac{\int _{\alpha }^{\beta }\hat{\Lambda }_{k}(x(t))d\lambda (t)}{\int _{\alpha }^{\beta }d\lambda (t)}\right] \\ & \quad +\sum _{k=1}^{n-1}\phi ^{(2k)}(b)(b-a)^{2k}\left[ \tfrac {\overline{x}-m}{M-m}\tilde{\Lambda }_{k}\left( M\right) +\tfrac {M-\overline{x}}{M-m}\tilde{\Lambda }_{k}\left( m\right) -\frac{\int _{\alpha }^{\beta }\tilde{\Lambda }_{k}(x(t))d\lambda (t)}{\int _{\alpha }^{\beta }d\lambda (t)}\right] . \end{align*}

Now, the same results as those mentioned in Remark 11 can be obtained for these functionals.

5 Bounds for identities related to the generalization of majorization inequality

For two Lebesgue integrable functions \(f,h:\left[ a,b\right] \rightarrow \mathbb {R}\) we consider the Čebyšev functional

\begin{align} \label{funkcionalC}\Omega (f,h)=\tfrac {1}{b-a}\int _{a}^{b}f(t)h(t)dt-\tfrac {1}{b-a}\int _{a}^{b}f(t)dt\tfrac {1}{b-a}\int _{a}^{b}h(t)dt. \end{align}
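As an aside, the value of \(\Omega (f,h)\) can be approximated numerically. The sketch below is our own illustration, not part of the paper's argument; composite trapezoidal quadrature is an assumed discretization. It recovers the closed value \(\Omega (f,h)=\tfrac {1}{3}-\tfrac {1}{4}=\tfrac {1}{12}\) for \(f(t)=h(t)=t\) on \([0,1]\).

```python
# Numerical sketch of the Cebysev functional Omega(f, h) from (41):
# Omega(f, h) = mean(f*h) - mean(f)*mean(h) over [a, b].
# Composite trapezoidal quadrature is our own illustrative choice.
def omega(f, h, a, b, num=20001):
    step = (b - a) / (num - 1)
    ts = [a + i * step for i in range(num)]
    def mean(g):
        # trapezoid rule: full sum minus half of each endpoint value
        total = sum(g(t) for t in ts) - 0.5 * (g(a) + g(b))
        return total * step / (b - a)
    return mean(lambda t: f(t) * h(t)) - mean(f) * mean(h)

# Example: f(t) = h(t) = t on [0, 1] gives 1/3 - 1/4 = 1/12.
val = omega(lambda t: t, lambda t: t, 0.0, 1.0)
```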

In [5], the authors proved the following theorems:

Theorem 18

Let \(f:\left[ a,b\right] \rightarrow \mathbb {R}\) be a Lebesgue integrable function and \(h:\left[ a,b\right] \rightarrow \mathbb {R}\) be an absolutely continuous function with \((\cdot -a)(b-\cdot )\left[ h^{\prime }\right] ^{2}\in L\left[ a,b\right] .\) Then we have the inequality

\begin{align} \label{ceroneineq1}\mid \Omega (f,h)\mid \leq \tfrac {\left[ \Omega (f,f)\right] ^{\frac{1}{2}}}{\sqrt{2}}\tfrac {1}{\sqrt{b-a}}\left( \int _{a}^{b}(x-a)(b-x)\left[ h^{\prime }(x)\right] ^{2}dx\right) ^{\frac{1}{2}}. \end{align}

The constant \(\frac{1}{\sqrt{2}}\) in (42) is the best possible.

Theorem 19

Assume that \(h:\left[ a,b\right] \rightarrow \mathbb {R}\) is monotonic nondecreasing on \(\left[ a,b\right] \) and \(f:\left[ a,b\right] \rightarrow \mathbb {R}\) is absolutely continuous with \(f^{\prime }\in L_{\infty }\left[ a,b\right] .\) Then we have the inequality

\begin{align} \label{ceroneineq2}\mid \Omega (f,h)\mid \leq \tfrac {1}{2(b-a)}\parallel f^{\prime }\parallel _{\infty }\int _{a}^{b}(x-a)(b-x)dh(x). \end{align}

The constant \(\frac{1}{2}\) in (43) is the best possible.
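For orientation, both bounds can be checked numerically on a concrete pair. In the sketch below (our own illustration; \(f(x)=x^{2}\), \(h(x)=x^{3}\) on \([0,1]\) are arbitrary choices satisfying the hypotheses of both theorems, and the quadrature is ours), the left-hand side is \(\Omega (f,h)=\tfrac {1}{12}\), while the right-hand sides of (42) and (43) evaluate to about \(0.0976\) and \(0.15\) respectively.

```python
# Numerical check of the bounds (42) and (43) for f(x) = x^2, h(x) = x^3
# on [0, 1]; the test functions and the quadrature are our own choices.
import math

a, b, num = 0.0, 1.0, 20001
step = (b - a) / (num - 1)
xs = [a + i * step for i in range(num)]

def integral(g):
    # trapezoid rule over [a, b]
    total = sum(g(x) for x in xs) - 0.5 * (g(a) + g(b))
    return total * step

mean = lambda g: integral(g) / (b - a)

f = lambda x: x ** 2        # absolutely continuous, |f'| <= 2 on [0, 1]
h = lambda x: x ** 3        # monotonic nondecreasing, h'(x) = 3x^2
hp = lambda x: 3 * x ** 2

omega_fh = mean(lambda x: f(x) * h(x)) - mean(f) * mean(h)   # = 1/12
omega_ff = mean(lambda x: f(x) ** 2) - mean(f) ** 2          # = 4/45

# Right-hand side of (42):
rhs42 = (math.sqrt(omega_ff) / math.sqrt(2)) * math.sqrt(
    integral(lambda x: (x - a) * (b - x) * hp(x) ** 2) / (b - a))
# Right-hand side of (43), with dh(x) = h'(x) dx and ||f'||_inf = 2:
rhs43 = (1.0 / (2 * (b - a))) * 2.0 * integral(
    lambda x: (x - a) * (b - x) * hp(x))
```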

In the sequel we use the above theorems to obtain generalizations of the results proved in the previous sections.

For \(m\)-tuples \(\mathbf{w}=(w_{1},...,w_{m})\), \(\mathbf{x}=(x_{1},...,x_{m})\) with \(x_{i}\in \left[ a,b\right] \), \(w_{i}\in \mathbb {R}\), \(i=1,...,m\), \(\overline{x}=\frac{1}{W_{m}}\sum _{i=1}^{m}w_{i}x_{i}\) and the function \(G_{n}\) as defined in (10), we denote

\begin{align} \label{function1}\Upsilon (t)=\tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}G_{n}\left( \tfrac {x_{i}-a}{b-a},\tfrac {t-a}{b-a}\right) -G_{n}\left( \tfrac {\overline{x}-a}{b-a},\tfrac {t-a}{b-a}\right) . \end{align}

Similarly, for a continuous function \(x:\left[ \alpha ,\beta \right] \rightarrow \left[ a,b\right] \), for \(\lambda :\left[ \alpha ,\beta \right] \rightarrow \mathbb {R}\) as defined in Remark 10 or in Remark 13, and for all \(s\in \left[ a,b\right] \) we denote

\begin{align} \label{function2}\tilde\Upsilon (s)=\frac{\int _{\alpha }^{\beta }G_{n}\left( \frac{x(t)-a}{b-a},\frac{s-a}{b-a}\right) d\lambda (t)}{\int _{\alpha }^{\beta }d\lambda (t)}-G_{n}\left( \tfrac {\overline{x}-a}{b-a},\tfrac {s-a}{b-a}\right) . \end{align}

We have the Čebyšev functionals defined as:

\begin{align} \label{funkcionalC1}\Omega (\Upsilon ,\Upsilon )=\tfrac {1}{b-a}\int _{a}^{b}\Upsilon ^{2}(t)dt-\left( \tfrac {1}{b-a}\int _{a}^{b}\Upsilon (t)dt\right) ^{2}, \\ \label{funkcionalC2}\Omega (\tilde\Upsilon ,\tilde\Upsilon )=\tfrac {1}{b-a}\int _{a}^{b}\tilde\Upsilon ^{2}(s)ds-\left( \tfrac {1}{b-a}\int _{a}^{b}\tilde\Upsilon (s)ds\right) ^{2}. \end{align}

Theorem 20

Let \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\) be such that \(\phi \in C^{(2n)}\left[ a,b\right] \) for \(n\in \mathbb {N}\) with \((\cdot -a)(b-\cdot )\left[ \phi ^{(2n+1)}\right] ^{2}\in L\left[ a,b\right] \), let \(x_{i}\in \left[ a,b\right] \), \(w_{i}\in \mathbb {R},\) \(i=1,2,...,m\), \(\overline{x}=\tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}x_{i}\), and let the functions \(G_{n}\), \(\Upsilon \) and \(\Omega \) be defined in (10), (44) and (46). Then

\begin{align} \tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}\phi (x_{i})-\phi (\overline{x}) = & \sum _{k=1}^{n-1}\phi ^{(2k)}(a)(b-a)^{2k}\left[ \tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}\hat{\Lambda }_{k}(x_{i})-\hat{\Lambda }_{k}(\overline{x})\right] \label{majorcerone} \\ & +\sum _{k=1}^{n-1}\phi ^{(2k)}(b)(b-a)^{2k}\left[ \tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}\tilde{\Lambda }_{k}(x_{i})-\tilde{\Lambda }_{k}(\overline{x})\right] \nonumber \\ & +(b-a)^{2n-1}\left( \phi ^{(2n-1)}(b)-\phi ^{(2n-1)}(a)\right) \times \nonumber \end{align}
\begin{align*} & \quad \times \left\{ \tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}\left[ \tilde{\Lambda }_{n}\left( x_{i}\right) +\hat{\Lambda }_{n}\left( x_{i}\right) \right] -\left[ \tilde{\Lambda }_{n}\left( \overline{x}\right) +\hat{\Lambda }_{n}\left( \overline{x}\right) \right] \right\} +H_{n}^{1}(\phi ;a,b), \end{align*}

where the remainder \(H_{n}^{1}(\phi ;a,b)\) satisfies the estimation

\begin{equation} \mid H_{n}^{1}(\phi ;a,b)\mid \leq \tfrac {(b-a)^{2n-\frac{1}{2}}}{\sqrt{2}}\left[ \Omega (\Upsilon ,\Upsilon ) \right] ^{\frac{1}{2}}\left\vert \int _{a}^{b}(t-a)(b-t)\left[ \phi ^{(2n+1)}(t)\right] ^{2}dt\right\vert ^{\frac{1}{2}}.\label{remainder1}\end{equation}

Proof.
If we apply Theorem 18 for \(f\rightarrow \Upsilon \) and \(h\rightarrow \phi ^{(2n)}\) we obtain
\begin{align*} & \left\vert \tfrac {1}{b-a}\int _{a}^{b}\Upsilon (t)\phi ^{(2n)}(t)dt-\tfrac {1}{b-a}\int _{a}^{b}\Upsilon (t)dt\cdot \tfrac {1}{b-a}\int _{a}^{b}\phi ^{(2n)}(t)dt\right\vert \leq \\ & \leq \tfrac {1}{\sqrt{2}}\left[ \Omega (\Upsilon ,\Upsilon )\right] ^{\frac{1}{2}}\tfrac {1}{\sqrt{b-a}}\left\vert \int _{a}^{b}(t-a)(b-t)\left[ \phi ^{(2n+1)}(t)\right] ^{2}dt\right\vert ^{\frac{1}{2}}. \end{align*}

Therefore we have

\begin{align*} & (b-a)^{2n-1}\int _{a}^{b}\Upsilon (t)\phi ^{(2n)}(t)dt=\\ & =(b-a)^{2n-2}\left( \phi ^{(2n-1)}(b)-\phi ^{(2n-1)}(a)\right) \int _{a}^{b}\Upsilon (t)dt+H_{n}^{1}(\phi ;a,b), \end{align*}

where the remainder \(H_{n}^{1}(\phi ;a,b)\) satisfies the estimation (49). Now from the identity (17) and the fact that \(\Lambda _{n}(1-t)=\int _{0}^{1}G_{n}(t,s)(1-s)ds\) (see [2]) we obtain (48).


The integral case of the above theorem can be given as follows:

Theorem 21

Let \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\) be such that \(\phi \in C^{(2n)}\left[ a,b\right] \) for \(n\in \mathbb {N}\) with \((\cdot -a)(b-\cdot )\left[ \phi ^{(2n+1)}\right] ^{2}\in L\left[ a,b\right] \), let \(x:\left[ \alpha ,\beta \right] \rightarrow \mathbb {R}\) be a continuous function such that \(x(\left[ \alpha ,\beta \right] )\subseteq \left[ a,b\right] \), let \(\lambda :\left[ \alpha ,\beta \right] \rightarrow \mathbb {R}\) be as defined in Remark 10 or in Remark 13, and \(\overline{x}=\frac{\int _{\alpha }^{\beta }x(t)\, d\lambda (t)}{\int _{\alpha }^{\beta }\, d\lambda (t)}\). Let the functions \(G_{n}\), \(\tilde{\Upsilon }\) and \(\Omega \) be defined in (10), (45) and (47). Then

\begin{align} \label{majorcerone_int} \frac{\int _{\alpha }^{\beta }\phi (x(t))d\lambda (t)}{\int _{\alpha }^{\beta }d\lambda (t)}-\phi (\overline{x})=& \sum _{k=1}^{n-1}\phi ^{(2k)}(a)(b-a)^{2k}\left[ \frac{\int _{\alpha }^{\beta }\hat{\Lambda }_{k}(x(t))d\lambda (t)}{\int _{\alpha }^{\beta }d\lambda (t)}-\hat{\Lambda }_{k}(\overline{x})\right] \\ & +\sum _{k=1}^{n-1}\phi ^{(2k)}(b)(b-a)^{2k}\left[ \frac{\int _{\alpha }^{\beta }\tilde{\Lambda }_{k}(x(t))d\lambda (t)}{\int _{\alpha }^{\beta }d\lambda (t)}-\tilde{\Lambda }_{k}(\overline{x})\right] \nonumber \\ & +(b-a)^{2n-1}\left( \phi ^{(2n-1)}(b)-\phi ^{(2n-1)}(a)\right) \times \nonumber \end{align}
\begin{align*} & \times \left\{ \frac{\int _{\alpha }^{\beta }\left[ \tilde{\Lambda }_{n}\left( x(t)\right) +\hat{\Lambda }_{n}\left( x(t)\right) \right] d\lambda (t)}{\int _{\alpha }^{\beta }d\lambda (t)}-\left[ \tilde{\Lambda }_{n}\left( \overline{x}\right) +\hat{\Lambda }_{n}\left( \overline{x}\right) \right] \right\} +\tilde{H}_{n}^{1}(\phi ;a,b), \end{align*}

where the remainder \(\tilde{H}_{n}^{1}(\phi ;a,b)\) satisfies the estimation

\[ \mid \tilde{H}_{n}^{1}(\phi ;a,b)\mid \leq \tfrac {(b-a)^{2n-\frac{1}{2}}}{\sqrt{2}}\left[ \Omega (\tilde{\Upsilon },\tilde{\Upsilon })\right] ^{\frac{1}{2}}\left\vert \int _{a}^{b}(s-a)(b-s)\left[ \phi ^{(2n+1)}(s)\right] ^{2}ds\right\vert ^{\frac{1}{2}}. \]

Using Theorem 19 we also get the following Grüss type inequality.

Theorem 22

Let \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\) be such that \(\phi \in C^{(2n)}\left[ a,b\right] \) for \(n\in \mathbb {N}\) and \(\phi ^{(2n+1)}\geq 0\) on \(\left[ a,b\right] \) and let the function \(\Upsilon \) be defined in (44). Then we have the representation (48) and the remainder \(H_{n}^{1}(\phi ;a,b)\) satisfies the bound

\begin{equation} \label{remainderGr1} | H_{n}^{1}(\phi ;a,b) | \leq (b-a)^{2n-1}\Vert \Upsilon ^{\prime }\Vert _{\infty }\left\{ \tfrac {\phi ^{(2n-1)}(b)+\phi ^{(2n-1)}(a)}{2}-\tfrac {\phi ^{(2n-2)}(b)-\phi ^{(2n-2)}(a)}{b-a}\right\} . \end{equation}

Proof.
Applying Theorem 19 for \(f\rightarrow \Upsilon \) and \(h\rightarrow \phi ^{(2n)}\) we obtain
\begin{align} & \left\vert \tfrac {1}{b-a}\int _{a}^{b}\Upsilon (t)\phi ^{(2n)}(t)dt-\tfrac {1}{b-a}\int _{a}^{b}\Upsilon (t)dt\cdot \tfrac {1}{b-a}\int _{a}^{b}\phi ^{(2n)}(t)dt\right\vert \leq \nonumber \label{c2id}\\ & \leq \tfrac {1}{2(b-a)}\Vert \Upsilon ^{\prime }\Vert _{\infty }\int _{a}^{b}(t-a)(b-t)\phi ^{(2n+1)}(t)dt. \end{align}

Since

\begin{align*} & \int _{a}^{b}(t-a)(b-t)\phi ^{(2n+1)}(t)dt= \\ & =\int _{a}^{b}\left[ 2t-(a+b)\right] \phi ^{(2n)}(t)dt\\ & =(b-a)\left[ \phi ^{(2n-1)}(b)+\phi ^{(2n-1)}(a)\right] -2\left( \phi ^{(2n-2)}(b)-\phi ^{(2n-2)}(a)\right) , \end{align*}

using the identity (17) and (52) we deduce (51).
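The two integrations by parts above can be checked on a concrete instance. The following sketch is an illustration only; the choice \(n=1\), \(\phi (t)=e^{t}\), \([a,b]=[0,1]\) is ours. It compares a quadrature value of \(\int _{a}^{b}(t-a)(b-t)\phi ^{(2n+1)}(t)dt\) with the closed form \((b-a)\left[ \phi ^{(2n-1)}(b)+\phi ^{(2n-1)}(a)\right] -2\left( \phi ^{(2n-2)}(b)-\phi ^{(2n-2)}(a)\right) \); both equal \(3-e\) here.

```python
# Check of the integration-by-parts identity from the proof, for the
# illustrative instance n = 1, phi(t) = exp(t), [a, b] = [0, 1], so that
# phi''' = exp and the closed form equals 3 - e.
import math

a, b, num = 0.0, 1.0, 20001
step = (b - a) / (num - 1)
ts = [a + i * step for i in range(num)]

g = lambda t: (t - a) * (b - t) * math.exp(t)   # (t-a)(b-t) phi'''(t)
# trapezoid rule (endpoint values vanish here, so the correction is 0)
lhs = (sum(g(t) for t in ts) - 0.5 * (g(a) + g(b))) * step

# (b-a)[phi'(b) + phi'(a)] - 2(phi(b) - phi(a)) with phi = exp
rhs = (b - a) * (math.exp(b) + math.exp(a)) - 2 * (math.exp(b) - math.exp(a))
```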


The integral version of the above theorem can be given as:

Theorem 23

Let \(\phi :\left[ a,b\right] \rightarrow \mathbb {R}\) be such that \(\phi \in C^{(2n)}\left[ a,b\right] \) for \(n\in \mathbb {N}\) and \(\phi ^{(2n+1)}\geq 0\) on \(\left[ a,b\right] \) and let the function \(\tilde\Upsilon \) be defined in (45). Then we have the representation (50) and the remainder \(\tilde H^{1}_{n}(\phi ;a,b)\) satisfies the bound

\begin{align*} \mid \tilde H^{1}_{n}(\phi ;a,b)\mid \leq (b-a)^{2n-1}\| \tilde\Upsilon ^{\prime }\| _{\infty }\left\{ \tfrac {\phi ^{(2n-1)}(b)+\phi ^{(2n-1)}(a)}{2}-\tfrac {\phi ^{(2n-2)}(b)-\phi ^{(2n-2)}(a)}{b-a}\right\} . \end{align*}

We also give the Ostrowsky type inequality related to the generalization of majorization inequality.

Theorem 24

Let \(x_{i}\in \left[ a,b\right] \), \(w_{i}\in \mathbb {R},\) \(i=1,2,...,m\), \(\overline{x}=\frac{1}{W_{m}}\sum _{i=1}^{m}w_{i}x_{i}\) and let \((p,q)\) be a pair of conjugate exponents, that is \(1\leq p,q\leq \infty \) and \(\frac{1}{p}+\frac{1}{q}=1\). Let \(\phi \in C^{(2n)}\left[ a,b\right] \) be such that \(\left\vert \phi ^{(2n)}\right\vert ^{p}:\left[ a,b\right] \rightarrow \mathbb {R}\) is an R-integrable function for some \(n\in \mathbb {N}\). Then we have

\begin{align} & \bigg|\tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}\phi (x_{i})-\phi (\overline{x})\label{ostrowskymajor} -\sum _{k=1}^{n-1}\phi ^{(2k)}(a)(b-a)^{2k}\bigg[ \tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}\hat{\Lambda }_{k}(x_{i})-\hat{\Lambda }_{k}(\overline{x})\bigg] \\ & -\sum _{k=1}^{n-1}\phi ^{(2k)}(b)(b-a)^{2k}\bigg[ \tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}\tilde{\Lambda }_{k}(x_{i})-\tilde{\Lambda }_{k}(\overline{x})\bigg] \bigg|\leq \nonumber \\ & \leq (b-a)^{2n-1}\| \phi ^{(2n)}\| _{p}\left( \int _{a}^{b}\bigg|\tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}G_{n}\left( \tfrac {x_{i}-a}{b-a},\tfrac {t-a}{b-a}\right) \! - \! G_{n}\left( \tfrac {\overline{x}-a}{b-a},\tfrac {t-a}{b-a}\! \right) \bigg|^{q}\! dt\! \right) ^{\frac{1}{q}}\! .\nonumber \\ & \nonumber \end{align}

The constant on the right hand side of (53) is sharp for \(1{\lt}p\leq \infty \) and the best possible for \(p=1.\)

Proof.
Let us denote
\[ \Psi (t)=(b-a)^{2n-1}\left[ \tfrac {1}{W_{m}}\sum _{i=1}^{m}w_{i}G_{n}\left( \tfrac {x_{i}-a}{b-a},\tfrac {t-a}{b-a}\right) -G_{n}\left( \tfrac {\overline{x}-a}{b-a},\tfrac {t-a}{b-a}\right) \right] . \]

Using the identity (17) and applying Hölder’s inequality we obtain

\begin{align*} & \left\vert \tfrac {1}{W_{m}}{\textstyle \sum \limits _{i=1}^{m}}w_{i}\phi (x_{i})-\phi (\overline{x})-{\textstyle \sum \limits _{k=1}^{n-1}}\phi ^{(2k)}(a)(b-a)^{2k}\left[ \tfrac {1}{W_{m}}{\textstyle \sum \limits _{i=1}^{m}} w_{i}\hat{\Lambda }_{k}(x_{i})-\hat{\Lambda }_{k}(\overline{x})\right] \right. \\ & -\left. {\textstyle \sum \limits _{k=1}^{n-1}}\phi ^{(2k)}(b)(b-a)^{2k}\left[ \tfrac {1}{W_{m}}{\textstyle \sum \limits _{i=1}^{m}} w_{i}\tilde{\Lambda }_{k}(x_{i})-\tilde{\Lambda }_{k}(\overline{x})\right] \right\vert = \\ & =\left\vert \int _{a}^{b}\Psi (t)\phi ^{(2n)}(t)dt\right\vert \leq \| \phi ^{(2n)}\| _{p}\left( \int _{a}^{b}|\Psi (t)|^{q}dt\right) ^{1/q}. \end{align*}

For the proof of the sharpness of the constant \(\left( \int _{a}^{b}\left\vert \Psi (t)\right\vert ^{q}dt\right) ^{1/q}\) let us find a function \(\phi \) for which the equality in (53) is obtained.
For \(1{\lt}p{\lt}\infty \) take \(\phi \) to be such that

\[ \phi ^{(2n)}(t)=\operatorname{sgn}\Psi (t)\left\vert \Psi (t)\right\vert ^{\frac{1}{p-1}}. \]

For \(p=\infty \) take \(\phi ^{(2n)}(t)=\operatorname{sgn}\Psi (t)\).
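The choice \(\phi ^{(2n)}=\operatorname{sgn}\Psi \left\vert \Psi \right\vert ^{\frac{1}{p-1}}\) is precisely the extremal function for Hölder's inequality. The sketch below is our own illustration: \(\Psi (t)=t-\tfrac {1}{2}\) on \([0,1]\) is a stand-in, not the \(\Psi \) of the proof, and it verifies numerically that this choice turns \(\left\vert \int \Psi g\right\vert \leq \Vert g\Vert _{p}\Vert \Psi \Vert _{q}\) into an equality for \(p=3\).

```python
# Why phi^(2n) = sgn(Psi)|Psi|^{1/(p-1)} is extremal: it turns Holder's
# inequality into an equality.  Illustrative check with p = 3, q = 3/2,
# and a stand-in Psi(t) = t - 1/2 on [0, 1] (our own choices).
p, q = 3.0, 1.5
num = 20001
step = 1.0 / (num - 1)
ts = [i * step for i in range(num)]

def integral(g):
    # trapezoid rule over [0, 1]
    total = sum(g(t) for t in ts) - 0.5 * (g(0.0) + g(1.0))
    return total * step

psi = lambda t: t - 0.5
sgn = lambda v: (v > 0) - (v < 0)
g = lambda t: sgn(psi(t)) * abs(psi(t)) ** (1.0 / (p - 1.0))

lhs = abs(integral(lambda t: psi(t) * g(t)))
rhs = (integral(lambda t: abs(g(t)) ** p) ** (1 / p)
       * integral(lambda t: abs(psi(t)) ** q) ** (1 / q))
# Holder gives lhs <= rhs; for this g the two sides coincide.
```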
For \(p=1\) we prove that

\begin{equation} \left\vert \int _{a}^{b}\Psi (t)\phi ^{(2n)}(t)dt\right\vert \leq \max _{t\in \lbrack a,b]}\left\vert \Psi (t)\right\vert \left( \int _{a}^{b}\left\vert \phi ^{(2n)}(t)\right\vert dt\right) \label{sharp_pom}\end{equation}

is the best possible inequality. Suppose that \(\left\vert \Psi (t)\right\vert \) attains its maximum at \(t_{0}\in \lbrack a,b].\) First we assume that \(\Psi (t_{0}){\gt}0\). For \(\varepsilon \) small enough we define \(\phi _{\varepsilon }(t)\) by

\[ \phi _{\varepsilon }(t)= \begin{cases} 0, & a\leq t\leq t_{0},\\[1mm] \frac{1}{\varepsilon \, (2n)!}(t-t_{0})^{2n}, & t_{0}\leq t\leq t_{0}+\varepsilon ,\\[1mm] \frac{1}{(2n-1)!}(t-t_{0})^{2n-1}, & t_{0}+\varepsilon \leq t\leq b. \end{cases} \]

Then for \(\varepsilon \) small enough

\[ \left\vert \int _{a}^{b}\Psi (t)\phi ^{(2n)}(t)dt\right\vert =\left\vert \int _{t_{0}}^{t_{0}+\varepsilon }\Psi (t)\tfrac {1}{\varepsilon }dt\right\vert =\tfrac {1}{\varepsilon }\int _{t_{0}}^{t_{0}+\varepsilon }\Psi (t)dt. \]

Now from the inequality (54) we have

\[ \tfrac {1}{\varepsilon }\int _{t_{0}}^{t_{0}+\varepsilon }\Psi (t)dt\leq \Psi (t_{0})\int _{t_{0}}^{t_{0}+\varepsilon }\tfrac {1}{\varepsilon }dt=\Psi (t_{0}). \]

Since

\[ \lim _{\varepsilon \rightarrow 0}\tfrac {1}{\varepsilon }\int _{t_{0}}^{t_{0}+\varepsilon }\Psi (t)dt=\Psi (t_{0}) \]

the statement follows. In the case \(\Psi (t_{0}){\lt}0\), we define \(\phi _{\varepsilon }(t)\) by

\[ \phi _{\varepsilon }(t)= \begin{cases} \frac{1}{(2n-1)!}(t-t_{0}-\varepsilon )^{2n-1}, & a\leq t\leq t_{0},\\[1mm] -\frac{1}{\varepsilon \, (2n)!}(t-t_{0}-\varepsilon )^{2n}, & t_{0}\leq t\leq t_{0}+\varepsilon ,\\[1mm] 0, & t_{0}+\varepsilon \leq t\leq b, \end{cases} \]

and the rest of the proof is the same as above.


The integral version of the above theorem can be stated as:

Theorem 25

Let \(x:[\alpha ,\beta ]\rightarrow \mathbb {R}\) be a continuous function such that \(x([\alpha ,\beta ])\subseteq \lbrack a,b]\), let \(\lambda :[\alpha ,\beta ]\rightarrow \mathbb {R}\) be as defined in Remark 10 or in Remark 13, \(\overline{x}=\frac{\int _{\alpha }^{\beta }x(t)\, d\lambda (t)}{\int _{\alpha }^{\beta }\, d\lambda (t)}\), and let \((p,q)\) be a pair of conjugate exponents, that is \(1\leq p,q\leq \infty \) and \(\frac{1}{p}+\frac{1}{q}=1\). Let \(\phi \in C^{(2n)}\left[ a,b\right] \) be such that \(\left\vert \phi ^{(2n)}\right\vert ^{p}:\left[ a,b\right] \rightarrow \mathbb {R}\) is an R-integrable function for some \(n\in \mathbb {N}\). Then we have

\begin{align} & \Bigg|\frac{\int _{\alpha }^{\beta }\phi (x(t))d\lambda (t)}{\int _{\alpha }^{\beta }d\lambda (t)}-\phi (\overline{x})\label{ostrowskymajorint} -\sum _{k=1}^{n-1}\phi ^{(2k)}(a)(b-a)^{2k}\left[ \frac{\int _{\alpha }^{\beta }\hat{\Lambda }_{k}(x(t))d\lambda (t)}{\int _{\alpha }^{\beta }d\lambda (t)}-\hat{\Lambda }_{k}(\overline{x})\right] \\ & -\sum _{k=1}^{n-1}\phi ^{(2k)}(b)(b-a)^{2k}\Bigg[ \frac{\int _{\alpha }^{\beta }\tilde{\Lambda }_{k}(x(t))d\lambda (t)}{\int _{\alpha }^{\beta }d\lambda (t)}-\tilde{\Lambda }_{k}(\overline{x})\Bigg] \Bigg|\nonumber \leq \\ & \leq (b\! -\! a)^{2n-1}\| \phi ^{(2n)}\| _{p}\left( \int _{a}^{b}\bigg|\frac{\int _{\alpha }^{\beta }G_{n}\big( \frac{x(t)-a}{b-a},\frac{s-a}{b-a}\big) d\lambda (t)}{\int _{\alpha }^{\beta }d\lambda (t)}-G_{n}\left( \tfrac {\overline{x}-a}{b-a},\tfrac {s-a}{b-a}\right) \bigg|^{q}ds\! \right) ^{\frac{1}{q}}\! \! .\nonumber \end{align}

The constant on the right hand side of (58) is sharp for \(1{\lt}p\leq \infty \) and the best possible for \(p=1.\)

Acknowledgements

The research of the authors has been fully supported by Croatian Science Foundation under the project \(5435\).

Bibliography

1

M. Adil Khan, N. Latif and J. Pečarić, Generalizations of majorization inequality via Lidstone’s polynomial and their applications, Commun. Math. Anal. 19 (2016) no. 2, 101–122.

2

R.P. Agarwal, P.J.Y. Wong, Error Inequalities in Polynomial Interpolation and Their Applications, Kluwer Academic Publishers, Dordrecht-Boston-London, 1993.

3

G. Aras-Gazić, V. Čuljak, J. Pečarić, A. Vukelić, Generalization of Jensen’s inequality by Lidstone’s polynomial and related results, Math. Ineq. Appl. 16 (2013) no. 4, 1243–1267.

4

K.E. Atkinson, An Introduction to Numerical Analysis, 2nd ed., Wiley, New York, 1989.

5

P. Cerone and S.S. Dragomir, Some new Ostrowsky-type bounds for the Čebyšev functional and applications, J. Math. Inequal. 8 (2014) no. 1, 159–170.

6

L. Fuchs, A new proof of an inequality of Hardy-Littlewood-Pólya, Mat. Tidsskr. B (1947), 53–54.

7

G.H. Hardy, J.E. Littlewood, G. Pólya, Some simple inequalities satisfied by convex functions, Messenger of Mathematics, 58 (1929), 145–152.

8

G.H. Hardy, J.E. Littlewood, G. Pólya, Inequalities, London and New York: Cambridge University Press, second ed., 1952.

9

S. Karlin, Total Positivity, Stanford Univ. Press, Stanford, 1968.

10

S. Karlin, W.J. Studden, Tchebycheff systems: with applications in analysis and statistics, Interscience, New York, 1966.

11

A.W. Marshall, I. Olkin and B.C. Arnold, Inequalities: Theory of Majorization and Its Applications (Second Edition), Springer Series in Statistics, New York, 2011.

12

J.E. Pečarić, F. Proschan and Y.L. Tong, Convex functions, partial orderings, and statistical applications, Mathematics in science and engineering 187, Academic Press, 1992.

13

T. Popoviciu, Sur l’approximation des fonctions convexes d’ordre superieur, Mathematica, 10 (1935), 49–54.

14

J.M. Whittaker, On Lidstone series and two-point expansions of analytic functions, Proc. Lond. Math. Soc., 36 (1933–1934), 451–469.

15

D.V. Widder, Completely convex function and Lidstone series, Trans. Amer. Math. Soc., 51 (1942), 387–398.

16

D.V. Widder, The Laplace Transform, Princeton Univ. Press, New Jersey, 1941.