On the refinements of Jensen-Mercer’s inequality\(^\bullet \)

M. Adil Khan,\(^{\ltimes ,\triangledown }\) Asif R. Khan\(^{\ast ,\triangledown }\) and J. Pečarić\(^{\S ,\triangledown }\)

January 30, 2012.

\(^\ltimes \)Department of Mathematics, University of Peshawar, Pakistan,
e-mail: adilbandai@yahoo.com.

\(^\triangledown \) Abdus Salam School of Mathematical Sciences, GC University, 68-B, New Muslim Town, Lahore 54600, Pakistan

\(^\ast \)Department of Mathematical Sciences, University of Karachi, University Road, Karachi, Pakistan, e-mail: asif_rizkhan@yahoo.com.

\(^\S \)University of Zagreb, Faculty of Textile Technology Zagreb, Croatia,
e-mail: pecaric@mahazu.hazu.hr.

\(^\bullet \)This research work is funded by Higher Education Commission Pakistan. The research of the third author was supported by the Croatian Ministry of Science, Education and Sports under the Research Grants 117-1170889-0888.

In this paper we give refinements of the Jensen-Mercer inequality and of its generalizations, and present applications to means. We prove \(n\)-exponential convexity of the functions constructed from these refinements. At the end we discuss some examples.

MSC. 26D15

Keywords. convex functions, Jensen-Mercer inequality, \(n\)-exponential convexity.

1 Introduction

In [ 8 ] A. McD. Mercer proved the following variant of Jensen’s inequality, to which we will refer as the Jensen-Mercer inequality.

Theorem 1

Let \([a,b]\) be an interval in \(\mathbb {R}\), and \(x_1,...,x_n\in [a,b]\). Let \(w_1,w_2,...,w_n\) be nonnegative real numbers such that \(\sum _{i=1}^nw_i=1\). If \(\phi \) is a convex function on \([a,b]\), then

\begin{equation} \label{MercerJenine} \phi \left(a+b-\sum _{i=1}^nw_ix_i\right)\leq \phi (a)+\phi (b)-\sum _{i=1}^nw_i\phi (x_i). \end{equation}
1
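
The inequality is easy to test numerically. The following Python sketch (an illustration only, with arbitrarily chosen convex function and randomly generated data; it is not part of the formal development) checks (1) for \(\phi (x)=e^x\):

\begin{verbatim}
import math
import random

def jensen_mercer_gap(phi, a, b, x, w):
    """Right-hand side minus left-hand side of (1);
    nonnegative when phi is convex on [a, b]."""
    lhs = phi(a + b - sum(wi * xi for wi, xi in zip(w, x)))
    rhs = phi(a) + phi(b) - sum(wi * phi(xi) for wi, xi in zip(w, x))
    return rhs - lhs

random.seed(0)
a, b = 0.0, 2.0
for _ in range(1000):
    n = random.randint(2, 6)
    x = [random.uniform(a, b) for _ in range(n)]
    w = [random.random() for _ in range(n)]
    s = sum(w)
    w = [wi / s for wi in w]        # normalize so that sum(w) == 1
    assert jensen_mercer_gap(math.exp, a, b, x, w) >= -1e-12
\end{verbatim}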

Given two real row \(n\)-tuples \(\mathbf{x}=(x_1,...,x_n)\) and \(\mathbf{y}=(y_1,...,y_n)\), \(\mathbf{y}\) is said to majorize \(\mathbf{x}\) if

\begin{equation*} \sum _{i=1}^k x_{[i]}\, \leq \, \sum _{i=1}^k y_{[i]} \end{equation*}

holds for \( k =1, 2, ..., n-1\) and

\begin{equation*} \sum _{i=1}^n x_{i}\, =\, \sum _{i=1}^n y_{i}, \end{equation*}

where \( x_{[1]}\geq ...\geq \, x_{[n]},\, \mbox{ and } y_{[1]}\geq ...\geq \, y_{[n]},\) are the entries of \(\mathbf{x}\) and \(\mathbf{y}\), respectively, in nonincreasing order (see [ 6 , p. 10 ] ).
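
The following small Python helper (an illustrative sketch only) makes the definition operational:

\begin{verbatim}
def majorizes(y, x, tol=1e-12):
    """True if the tuple y majorizes x: the partial sums of the
    decreasing rearrangements satisfy the inequalities above and
    the total sums agree."""
    xs = sorted(x, reverse=True)
    ys = sorted(y, reverse=True)
    px = py = 0.0
    for k in range(len(x) - 1):
        px += xs[k]
        py += ys[k]
        if px > py + tol:
            return False
    return abs(sum(x) - sum(y)) <= tol

# (1, 1, 1) is majorized by (3, 0, 0), but not conversely:
assert majorizes([3, 0, 0], [1, 1, 1])
assert not majorizes([1, 1, 1], [3, 0, 0])
\end{verbatim}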

The following extension of (1) is given in [ 9 ] .

Theorem 2

Let \(\phi :[a,b]\rightarrow \mathbb {R}\) be a continuous convex function on \([a,b]\). Suppose that \(\textbf{a}=(a_1,...,a_m)\) with \(a_j\in [a,b]\), and \(\textbf{X}=(x_{ij})\) is a real \(n\times m\) matrix such that \(x_{ij}\in [a,b]\) for all \(i=1,\ldots ,n;\, \, j=1,\ldots ,m\).

If \(\textbf{a}\) majorizes each row of \(\textbf{X}\), that is

\[ \textbf{x}_{i.}=(x_{i1},...,x_{im})\prec (a_1,...,a_m)=\textbf{a}\mbox{ for each }i=1,...,n; \]

then we have the inequality

\begin{equation} \label{niezgodaineq} \phi \left(\sum _{j=1}^{m}a_j-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix_{ij}\right)\leq \sum _{j=1}^{m}\phi (a_j)-\sum _{j=1}^{m-1}\sum _{i=1}^nw_i\phi (x_{ij}), \end{equation}
2

where \(\sum _{i=1}^{n}w_i=1\) with \(w_{i}\geq 0\).
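
Inequality (2) can likewise be probed numerically. In the sketch below (an illustration under the stated hypotheses, not part of the paper), each row of \(\textbf{X}\) is produced as \(\textbf{a}\textbf{A}\) for a doubly stochastic \(\textbf{A}=\lambda \textbf{I}+(1-\lambda )\textbf{P}\) with \(\textbf{P}\) a permutation matrix, so each row is majorized by \(\textbf{a}\) (cf. (8) below):

\begin{verbatim}
import math
import random

def sides_of_2(phi, a_vec, X, w):
    """Left- and right-hand sides of (2)."""
    m, n = len(a_vec), len(X)
    inner = sum(w[i] * X[i][j] for j in range(m - 1) for i in range(n))
    lhs = phi(sum(a_vec) - inner)
    rhs = (sum(phi(aj) for aj in a_vec)
           - sum(w[i] * phi(X[i][j]) for j in range(m - 1) for i in range(n)))
    return lhs, rhs

random.seed(1)
a_vec = [0.5, 1.0, 1.5, 2.0]              # entries of a, inside [0.5, 2]
w = [0.2, 0.3, 0.5]
for _ in range(200):
    X = []
    for _ in range(len(w)):
        perm = random.sample(a_vec, len(a_vec))   # a permuted copy of a
        lam = random.random()
        # row = a(lam*I + (1-lam)*P): doubly stochastic, so row is
        # majorized by a
        X.append([lam * u + (1 - lam) * v for u, v in zip(a_vec, perm)])
    lhs, rhs = sides_of_2(math.exp, a_vec, X, w)
    assert lhs <= rhs + 1e-10
\end{verbatim}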

In this paper we give refinements of (1) and (2) and present applications to means. We construct functionals from these refinements and prove mean value theorems for them. The notion of \(n\)-exponential convexity was introduced in [ 10 ] . The class of \(n\)-exponentially convex functions is more general than the class of log-convex functions. We follow the method illustrated in [ 10 ] to prove \(n\)-exponential convexity and exponential convexity of these functionals.

2 Main results

Let \(\phi :[a,b]\rightarrow \mathbb {R}\) be a convex function, let \(x_i\in [a,b]\) and let \(w_i{\gt}0\), \(i\in \{ 1,2,...,n\} \), with \(\sum _{i=1}^nw_i=1\). Throughout the paper we assume that \(I\subset \{ 1,2,...,n\} \) with \(I\neq \emptyset \) and \(I\neq \{ 1,2,...,n\} \), unless stated otherwise. We define \(W_I=\sum _{i \in I}w_i\) and \(W_{\overline{I}}=1-\sum _{i \in {I}}w_i\). For the convex function \(\phi \) and the \(n\)-tuples \(\mathbf{x}=(x_1,...,x_n)\) and \(\mathbf{w}=(w_1,...,w_n)\) as above, we define the following functional:

\begin{equation} \label{functional} D(\mathbf{w},\mathbf{x},\phi ;I):=W_I\phi \left(a+b-\tfrac {1}{W_I}\sum _{i \in I}w_ix_i\right)+W_{\overline{I}}\phi \left(a+b-\tfrac {1}{W_{\overline{I}}}\sum _{i \in \overline{I}}w_ix_i\right). \end{equation}

It is worth observing that for \(I=\{ k\} \), \(k\in \{ 1,...,n\} \), we have the functional

\begin{align*} \label{spefunctional} D_k(\mathbf{w},\mathbf{x},\phi ) & :=D(\mathbf{w},\mathbf{x},\phi ;\{ k\} )\nonumber \\ & =w_k\phi (a+b-x_k)+(1-w_k)\phi \left(a+b-\tfrac {\sum _{i=1}^nw_ix_i-w_kx_k}{1-w_k}\right). \end{align*}

The following refinement of (1) is valid.

Theorem 3

Let \([a,b]\) be an interval in \(\mathbb {R}\), and \(x_1,...,x_n\in [a,b]\). Let \(w_1,w_2,...,w_n\) be positive real numbers such that \(\sum _{i=1}^nw_i=1\). If \(\phi :[a,b]\rightarrow \mathbb {R}\) is a convex function, then for any nonempty proper subset \(I\) of \(\{ 1,...,n\} \) we have

\begin{equation} \label{MercerJenineref} \phi \left(a+b-\sum _{i=1}^nw_ix_i\right)\leq D(\mathbf{w},\mathbf{x},\phi ;I)\leq \phi (a)+\phi (b)-\sum _{i=1}^nw_i\phi (x_i). \end{equation}
3

Proof.
By the convexity of the function \(\phi \) we have

\begin{align*} & \phi \Big(a+b-\sum _{i=1}^nw_ix_i\Big) = \phi \Big(\sum _{i=1}^nw_i\Big(a+b-x_i\Big)\Big) \\ & = \phi \Big(W_I\Big(\tfrac {1}{W_I}\sum _{i \in I}w_i\Big(a+b-x_i\Big)\Big)+W_{\overline{I}}\Big(\tfrac {1}{W_{\overline{I}}}\sum _{i \in \overline{I}}w_i\Big(a+b-x_i\Big)\Big)\Big)\\ & \leq W_I\phi \Big(\tfrac {1}{W_I}\sum _{i \in I}w_i\Big(a+b-x_i\Big)\Big)+W_{\overline{I}}\phi \Big(\tfrac {1}{W_{\overline{I}}}\sum _{i \in \overline{I}}w_i\Big(a+b-x_i\Big)\Big)\\ & =D(\mathbf{w},\mathbf{x},\phi ;I)\hspace{8.7cm} \end{align*}

for any \(I\), which proves the first inequality in (3).

By the Jensen-Mercer inequality (1) we also have

\begin{align*} & D(\mathbf{w},\mathbf{x},\phi ;I)=W_{I}\phi \Big(a+b-\tfrac {1}{W_{I}}\sum _{i\in I}w_ix_i\Big)+ W_{\overline{I}}\phi \Big(a+b-\tfrac {1}{W_{\overline{I}}}\sum _{i\in \overline{I}}w_ix_i\Big)\\ & \leq W_{I}\Big(\phi (a)+\phi (b)-\tfrac {1}{W_{I}}\sum _{i\in I}w_i\phi (x_i)\Big)+ W_{\overline{I}}\Big(\phi (a)+\phi (b)-\tfrac {1}{W_{\overline{I}}}\sum _{i\in \overline{I}}w_i\phi (x_i)\Big) \\ & = \phi (a)+\phi (b)-\sum _{i=1}^nw_i\phi (x_i)\hspace{-4cm}. \end{align*}

for any \(I\), which proves the second inequality in (3).

□

Remark 4

The left-hand inequality in (3) also follows from the proof of Theorem 2.3 in [ 7 ] .□

Remark 5

We observe that the inequality (3) can be written in an equivalent form as

\begin{equation} \label{min} \phi \left(a+b-\sum _{i=1}^nw_ix_i\right)\leq \min \limits _{I} D(\mathbf{w},\mathbf{x},\phi ;I) \end{equation}
4

and

\begin{equation} \label{max} \max \limits _{I} D(\mathbf{w},\mathbf{x},\phi ;I)\leq \phi (a)+\phi (b)-\sum _{i=1}^nw_i\phi (x_i). \end{equation}
5

The following special cases of (4) and (5) can be given:

\begin{equation*} \label{minspe} \phi \left(a+b-\sum _{i=1}^nw_ix_i\right)\leq \min \limits _{ k\in \{ 1,...,n\} } D_k(\mathbf{w},\mathbf{x},\phi ) \end{equation*}

and

\begin{equation*} \label{maxspe} \max \limits _{k\in \{ 1,...,n\} } D_k(\mathbf{w},\mathbf{x},\phi )\leq \phi (a)+\phi (b)-\sum _{i=1}^nw_i\phi (x_i).\hfil \qed \end{equation*}
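
A direct computation illustrates (4) and (5); the following Python sketch (illustrative only, with arbitrarily chosen data) evaluates \(D(\mathbf{w},\mathbf{x},\phi ;I)\) over all admissible index sets \(I\):

\begin{verbatim}
import math
from itertools import combinations

def D(w, x, phi, a, b, I):
    """The functional D(w, x, phi; I) defined above."""
    idx = set(I)
    WI = sum(w[i] for i in idx)
    WIc = 1.0 - WI
    tI = a + b - sum(w[i] * x[i] for i in idx) / WI
    tIc = a + b - sum(w[i] * x[i] for i in range(len(w))
                      if i not in idx) / WIc
    return WI * phi(tI) + WIc * phi(tIc)

a, b = 1.0, 3.0
x = [1.2, 2.9, 1.7, 2.2]
w = [0.1, 0.4, 0.3, 0.2]
phi = lambda t: t * t                     # a convex choice

lhs = phi(a + b - sum(wi * xi for wi, xi in zip(w, x)))
rhs = phi(a) + phi(b) - sum(wi * phi(xi) for wi, xi in zip(w, x))
values = [D(w, x, phi, a, b, I)
          for r in range(1, len(x))       # proper nonempty subsets
          for I in combinations(range(len(x)), r)]
assert lhs <= min(values) and max(values) <= rhs
\end{verbatim}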

The case of the uniform distribution, namely when \(w_i=\tfrac {1}{n}\) for all \(i=1,2,...,n\), is of interest as well. If we consider a natural number \(m\in \{ 1,2,\ldots ,n-1\} \) and define

\begin{equation*} \label{spefuncunif} D_m(\mathbf{x},\phi ):=\tfrac {m}{n}\phi \left(a+b-\tfrac {1}{m}\sum _{i=1}^mx_i\right)+\tfrac {n-m}{n}\phi \left(a+b-\tfrac {1}{n-m}\sum _{j=m+1}^nx_j\right) \end{equation*}

then we can state the following result:

Corollary 6

If \(\phi :[a,b]\rightarrow \mathbb {R}\) is a convex function, \(x_i\in [a,b]\), \(i\in \{ 1,2,...,n\} \), then for any \(m\in \{ 1,2,...,n-1\} \) we have

\begin{equation*} \label{Mercerrefspecialcas} \phi \left(a+b-\tfrac {1}{n}\sum _{i=1}^nx_i\right)\leq D_m(\mathbf{x},\phi )\leq \phi (a)+\phi (b)-\tfrac {1}{n}\sum _{i=1}^n\phi (x_i). \end{equation*}

In particular, we have the bounds

\[ \phi \left(a+b-\tfrac {1}{n}\sum _{i=1}^nx_i\right) \leq \min \limits _{m\in \{ 1,...,n-1\} }D_m(\mathbf{x},\phi ) \]

and

\[ \max \limits _{m\in \{ 1,...,n-1\} }D_m(\mathbf{x},\phi )\leq \phi (a)+\phi (b)-\tfrac {1}{n}\sum _{i=1}^n\phi (x_i). \]

The following refinement of (2) is valid.

Theorem 7

Let \(\phi :[a,b]\rightarrow \mathbb {R}\) be a continuous convex function on \([a,b]\). Suppose that \(\textbf{a}=(a_1,...,a_m)\) with \(a_j\in [a,b]\), and \(\textbf{X}=(x_{ij})\) is a real \(n\times m\) matrix such that \(x_{ij}\in [a,b]\) for all \(i=1,\ldots ,n;\, \, j=1,\ldots ,m\).

If \(\textbf{a}\) majorizes each row of \(\textbf{X}\), then for any nonempty proper subset \(I\) of \(\{ 1,...,n\} \) we have

\begin{equation} \label{niezgodaineqref} \phi \left(\sum _{j=1}^{m}a_j-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix_{ij}\right)\leq \tilde{D}(\mathbf{w},\mathbf{X},\phi ;I)\leq \sum _{j=1}^{m}\phi (a_j)-\sum _{j=1}^{m-1}\sum _{i=1}^nw_i\phi (x_{ij}), \end{equation}
6

where

\begin{align} \label{tilD} & \tilde{D}(\mathbf{w},\mathbf{X},\phi ;I):=\\ & =W_{I} \phi \left(\sum _{j=1}^{m}a_j-\tfrac {1}{W_I}\sum _{j=1}^{m-1}\sum _{i\in I}w_ix_{ij}\right)+ W_{\overline{I}} \phi \left( \sum _{j=1}^{m}a_j- \tfrac {1}{W_{\overline{I}}}\sum _{j=1}^{m-1} \sum _{i\in \overline{I}}w_ix_{ij}\right),\nonumber \end{align}

\(W_{I}=\sum _{i\in I}w_i, W_{\overline{I}}=\sum _{i\in \overline{I}}w_i\), \(\sum _{i=1}^nw_i=1\) with \(w_{i}{\gt}0\).

Proof.
The proof is similar to that of Theorem 3; one uses (2) in place of (1).
□
As above we can give the following remark.
Remark 8
\[ \phi \left(\sum _{j=1}^{m}a_j-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix_{ij}\right)\leq \min \limits _{I}\tilde{D}(\mathbf{w},\mathbf{X},\phi ;I) \]

and

\[ \max \limits _{I}\tilde{D}(\mathbf{w},\mathbf{X},\phi ;I)\leq \sum _{j=1}^{m}\phi (a_j)-\sum _{j=1}^{m-1}\sum _{i=1}^nw_i\phi (x_{ij}).\hfil \qed \]

Remark 9

If in (6) we set \(m=2\), \(a_1=a\), \(a_2=b\) and \(x_{i1}=x_i\) for \(i=1,...,n\), we get (3).□

An \(m \times m\) matrix \(\textbf{A}=(a_{jk})\) is said to be doubly stochastic, if \(a_{jk} \geq 0\) and \(\sum _{j=1}^{m}a_{jk}=\sum _{k=1}^{m}a_{jk}=1\) for all \(j,k=1,...,m\). It is well known [ 6 , p. 20 ] that if \(\textbf{A}\) is an \(m\times m\) doubly stochastic matrix, then

\begin{equation} \label{stocheq} \textbf{a}\textbf{A}\prec \textbf{a}\mbox{ for each real $m$-tuple } \textbf{a}=(a_1,a_2,...,a_m). \end{equation}
8

By applying Theorem 7 and (8), one obtains:

Corollary 10

Let \(\phi :[a,b]\rightarrow \mathbb {R}\) be a continuous convex function on \([a,b]\). Suppose that \(\textbf{a}=(a_1,...,a_m)\) with \(a_j\in [a,b]\), \(j=1,...,m\), and that \(\textbf{A}_1, \textbf{A}_2,...,\textbf{A}_n\) are \(m\times m\) doubly stochastic matrices. Set

\[ X=(x_{ij})=\begin{pmatrix} \textbf{a}\textbf{A}_1 \\ \vdots \\ \textbf{a}\textbf{A}_n \\ \end{pmatrix}. \]

Then inequalities in (6) hold.

Remark 11

In [ 4 ] Dragomir has given related refinements of Jensen’s inequality.□

3 Applications

For \(\emptyset \neq I\subseteq \{ 1,...,n\} \) let \(A_I,G_I,H_I\) and \(M^{[r]}_I\) be, respectively, the arithmetic, geometric and harmonic means and the power mean of order \(r\in \mathbb {R}\) of \(x_i\in [a,b]\), where \(0 {\lt} a {\lt} b\), formed with the positive weights \(w_i\), \(i\in I\). For \(I=\{ 1,...,n\} \) we denote the arithmetic, geometric, harmonic and power means by \(A_n,G_n,H_n\) and \(M^{[r]}_n\), respectively.

If we define

\begin{eqnarray*} \tilde{A}_I:& =& a+b-\tfrac {1}{W_{I}}\sum _{i\in I}w_ix_i=a+b-A_I\\ \tilde{G}_I:& =& \tfrac {ab}{\left(\prod \limits _{i\in I} x_i^{w_i}\right)^{\tfrac {1}{W_I}}}=\tfrac {ab}{G_I}\\ \tilde{H}_I:& =& \left(a^{-1}+b^{-1}-\tfrac {1}{W_{I}}\sum _{i\in I}w_ix^{-1}_i\right)^{-1}=\left(a^{-1}+b^{-1}-H^{-1}_I\right)^{-1}\\ \tilde{M}^{[r]}_I:& =& \begin{cases} \left(a^r+b^r-\left(M^{[r]}_I\right)^{r}\right)^{\tfrac {1}{r}}, & r\neq 0;\\ \tilde{G}_I, & r=0, \end{cases}\end{eqnarray*}

where

\[ {M}^{[r]}_I:=\begin{cases} \left(\tfrac {1}{W_{I}}\sum _{i\in I}w_ix^{r}_i\right)^{\tfrac {1}{r}}, & r\neq 0;\\ \left(\prod \limits _{i\in I} x_i^{w_i}\right)^{\tfrac {1}{W_I}}, & r=0, \end{cases} \]

then the following inequalities hold.

Theorem 12
\begin{eqnarray} \label{app1} \quad (i) \, \, \, \, \tilde{G}_n \leq \min \limits _{I} \tilde{A}_I^{W_I} \tilde{A}_{\overline{I}}^{W_{\overline{I}}}\, \, \, \, \mbox{ and } \tilde{A}_n \geq \max \limits _{I} \tilde{A}_I^{W_I} \tilde{A}_{\overline{I}}^{W_{\overline{I}}}.\hspace{3cm} \end{eqnarray}
\begin{eqnarray} \label{app2} \quad (ii) \, \, \, \, \tilde{G}_n \leq \min \limits _{I}\left[W_I\tilde{G}_I+ W_{\overline{I}}\tilde{G}_{\overline{I}}\right]\, \mbox{ and } \tilde{A}_n \geq \max \limits _{I}\left[W_I\tilde{G}_I+ W_{\overline{I}}\tilde{G}_{\overline{I}}\right]. \end{eqnarray}
Proof.
(i) Applying Theorem 3 to the convex function \(\phi (x)=-\ln x\), we obtain

\begin{equation} \label{sol1} -\ln \tilde{A}_n\leq - W_I\ln \tilde{A}_I- W_{\overline{I}}\ln \tilde{A}_{\overline{I}} \leq - \ln \tilde{G}_n. \end{equation}
15

Now the inequalities in (i) follow from Remark 5 and (15).

(ii) Applying Theorem 3 to the convex function \(\phi (x)=\exp (x)\), replacing \(a, b\) and \(x_i\) with \(\ln a, \ln b\) and \(\ln x_i\), respectively, and using Remark 5, we obtain the inequalities in (ii).

□
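
For concreteness, the following Python sketch (illustrative data only, not part of the formal development) checks both parts of Theorem 12 for one index set \(I\):

\begin{verbatim}
import math

def tilde_A_G(a, b, x, w, idx):
    """The means tilde-A_I and tilde-G_I for the index set idx."""
    WI = sum(w[i] for i in idx)
    A = sum(w[i] * x[i] for i in idx) / WI
    G = math.prod(x[i] ** w[i] for i in idx) ** (1.0 / WI)
    return a + b - A, a * b / G

a, b = 1.0, 4.0
x = [1.5, 3.0, 2.0]
w = [0.2, 0.5, 0.3]
I, Ic = [0, 2], [1]
WI = sum(w[i] for i in I)

At_n, Gt_n = tilde_A_G(a, b, x, w, [0, 1, 2])
At_I, Gt_I = tilde_A_G(a, b, x, w, I)
At_c, Gt_c = tilde_A_G(a, b, x, w, Ic)

# part (i): the middle term sits between tilde-G_n and tilde-A_n
assert Gt_n <= At_I ** WI * At_c ** (1 - WI) <= At_n
# part (ii)
assert Gt_n <= WI * Gt_I + (1 - WI) * Gt_c <= At_n
\end{verbatim}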
The following particular case of Theorem 12 is of interest.
Corollary 13
\begin{eqnarray*} (i) \, \, \, \, \tfrac {1}{\tilde{G}_n}\leq \min \limits _{I} \tfrac {1}{\tilde{H}^{W_I}_I\tilde{H}_{\overline{I}}^{W_{\overline{I}}}}\, \, \, \, \mbox{ and }\, \, \, \, \tfrac {1}{\tilde{H}_n}\geq \max \limits _{I} \tfrac {1}{\tilde{H}^{W_I}_I\tilde{H}_{\overline{I}}^{W_{\overline{I}}}}.\hspace{1cm} \end{eqnarray*}
\begin{eqnarray*} \label{corapp2} (ii) \, \, \, \, \tfrac {1}{\tilde{G}_n}\leq \min \limits _{I} \left[ \tfrac {W_I}{\tilde{G}_I}+ \tfrac {W_{\overline{I}}}{\tilde{G}_{\overline{I}}}\right]\, \, \, \, \mbox{ and }\, \, \, \, \tfrac {1}{\tilde{H}_n}\geq \max \limits _{I}\left[\tfrac {W_I}{\tilde{G}_I}+ \tfrac {W_{\overline{I}}}{\tilde{G}_{\overline{I}}}\right]. \end{eqnarray*}
Proof.
This follows directly from Theorem 12 by the substitutions \(a\rightarrow \tfrac {1}{a}\), \(b\rightarrow \tfrac {1}{b}\), \(x_i\rightarrow \tfrac {1}{x_i}\).
□
Theorem 14

For \(r\leq 1\), we have the following inequalities

\begin{eqnarray} \label{pmean} \tilde{M}^{[r]}_n\leq \min \limits _{I} \left[W_I \tilde{M}^{[r]}_I +W_{\overline{I}}\tilde{M}^{[r]}_{\overline{I}}\right],\nonumber \\ \tilde{A}_n\geq \max \limits _{I} \left[W_I \tilde{M}^{[r]}_I +W_{\overline{I}}\tilde{M}^{[r]}_{\overline{I}}\right]. \end{eqnarray}

For \(r\geq 1\), the inequalities in (16) are reversed.

Proof.
For \(r\leq 1\), \(r\neq 0\), apply Theorem 3 to the convex function \(\phi (x)=x^{\tfrac {1}{r}}\), replacing \(a,b\) and \(x_i\) with \(a^r,b^r\) and \(x_i^r\), respectively; for \(r=0\) apply Theorem 3 to the convex function \(\phi (x)=\exp (x)\), replacing \(a,b\) and \(x_i\) with \(\ln a,\ln b\) and \(\ln x_i\), respectively. In both cases (16) follows by Remark 5.
If \(r\geq 1\), then the function \(\phi (x)=x^{\tfrac {1}{r}}\) is concave, so the inequalities in (16) are reversed.
□
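
The power-mean refinement can be checked in the same way; the Python sketch below (illustrative only) tests (16) for several values of \(r\leq 1\):

\begin{verbatim}
import math

def M_tilde(a, b, x, w, idx, r):
    """tilde-M^[r]_I as defined above (r = 0 gives tilde-G_I)."""
    WI = sum(w[i] for i in idx)
    if r == 0:
        G = math.prod(x[i] ** w[i] for i in idx) ** (1.0 / WI)
        return a * b / G
    Mr_r = sum(w[i] * x[i] ** r for i in idx) / WI
    return (a ** r + b ** r - Mr_r) ** (1.0 / r)

a, b = 1.0, 4.0
x = [1.5, 3.0, 2.0]
w = [0.2, 0.5, 0.3]
I, Ic, All = [0, 2], [1], [0, 1, 2]
WI = sum(w[i] for i in I)

for r in (-1.0, 0, 0.5, 1.0):
    mid = (WI * M_tilde(a, b, x, w, I, r)
           + (1 - WI) * M_tilde(a, b, x, w, Ic, r))
    # (16): tilde-M^[r]_n <= mid <= tilde-A_n = tilde-M^[1]_n
    assert M_tilde(a, b, x, w, All, r) <= mid
    assert mid <= M_tilde(a, b, x, w, All, 1.0) + 1e-12
\end{verbatim}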
Corollary 15
\begin{eqnarray*} \tilde{H}_n\leq \min \limits _{I} \left[W_I \tilde{H}_I +W_{\overline{I}}\tilde{H}_{\overline{I}}\right],\nonumber \\ \tilde{A}_n\geq \max \limits _{I} \left[W_I \tilde{H}_I +W_{\overline{I}}\tilde{H}_{\overline{I}}\right]. \end{eqnarray*}
Remark 16

Obviously, part (ii) of Theorem 12 is also a direct consequence of Theorem 14.□

Theorem 17

Let \(r,s\in \mathbb {R}\), \(r\leq s\).

(i) If \(s\geq 0,\) then

\begin{eqnarray} \label{pmean2} \left(\tilde{M}^{[r]}_n\right)^s\leq \min \limits _{I} \left[W_I \left(\tilde{M}^{[r]}_I \right)^s +W_{\overline{I}} \left(\tilde{M}^{[r]}_{\overline{I}}\right)^s\right],\nonumber \\ \left(\tilde{M}^{[s]}_n\right)^s\geq \max \limits _{I} \left[W_I \left(\tilde{M}^{[r]}_I \right)^s +W_{\overline{I}} \left(\tilde{M}^{[r]}_{\overline{I}}\right)^s\right]. \end{eqnarray}

(ii) If \(s{\lt}0\), then the inequalities in (17) are reversed.

Proof.
Let \(s\geq 0\). Applying Theorem 3 and Remark 5 to the convex function \(\phi (x)=x^{\tfrac {s}{r}}\), replacing \(a,b\) and \(x_i\) with \(a^r,b^r\) and \(x_i^r\), respectively, we obtain (17).
If \(s{\lt}0\), then the function \(\phi (x)=x^{\tfrac {s}{r}}\) is concave, so the inequalities in (17) are reversed.
□
Let \(\phi :[a,b]\rightarrow \mathbb {R}\) be a strictly monotonic and continuous function. Then for a given \(n\)-tuple \(\textbf{x}=(x_1,...,x_n)\in [a,b]^n\) and a positive \(n\)-tuple \(\textbf{w}=(w_1,...,w_n)\) with \(\sum _{i=1}^{n} w_i=1\), the value

\[ M^{[n]}_{\phi }=\phi ^{-1}\left(\sum _{i=1}^{n}w_i\phi (x_i)\right) \]

is well defined and is called the quasi-arithmetic mean of \(\textbf{x}\) with weight \(\textbf{w}\) (see for example [ 2 , p. 215 ] ). If we define

\[ \tilde{M}^{[n]}_{\phi }=\phi ^{-1}\left(\phi (a)+\phi (b)-\sum _{i=1}^{n}w_i\phi (x_i)\right), \]

then we have the following results.

Theorem 18

Let \(\phi ,\psi :[a,b]\rightarrow \mathbb {R}\) be strictly monotonic and continuous functions. If \(\psi \circ \phi ^{-1}\) is convex on \(\phi ([a,b])\), then

\begin{eqnarray} \label{qmean} \psi \left(\tilde{M}^{[n]}_{\phi }\right)\leq \min \limits _{I} \left[W_I \psi \left(\tilde{M}^{[I]}_{\phi } \right) +W_{\overline{I}} \psi \left(\tilde{M}^{[\overline{I}]}_{\phi }\right)\right],\nonumber \\ \psi \left(\tilde{M}^{[n]}_{\psi }\right)\geq \max \limits _{I} \left[W_I \psi \left(\tilde{M}^{[I]}_{\phi } \right) +W_{\overline{I}} \psi \left(\tilde{M}^{[\overline{I}]}_{\phi }\right)\right], \end{eqnarray}
\[ \mbox{where }\tilde{M}^{[J]}_{\phi }=\phi ^{-1}\left(\phi (a)+\phi (b)-\tfrac {1}{W_J}\sum _{i\in J}w_i\phi (x_i)\right)\mbox{ and }\tilde{M}^{[n]}_{\psi }\mbox{ is defined analogously with }\psi \mbox{ in place of }\phi . \]
Proof.
Applying Theorem 3 to the convex function \(f=\psi \circ \phi ^{-1}\), replacing \(a, b\) and \(x_i\) with \(\phi (a),\phi (b)\) and \(\phi (x_i)\), respectively, and then using Remark 5, we obtain the desired inequalities.
□
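
As an illustration of Theorem 18 (a sketch with the arbitrary choice \(\phi =\ln \), \(\psi =\mathrm{id}\), for which \(\psi \circ \phi ^{-1}=\exp \) is convex), one can verify the first inequality of the theorem numerically:

\begin{verbatim}
import math

def M_tilde_phi(phi, phi_inv, a, b, x, w, idx=None):
    """tilde-M^[J]_phi from Theorem 18; idx=None gives the full
    index set, i.e. tilde-M^[n]_phi."""
    idx = range(len(x)) if idx is None else idx
    WJ = sum(w[i] for i in idx)
    s = sum(w[i] * phi(x[i]) for i in idx) / WJ
    return phi_inv(phi(a) + phi(b) - s)

a, b = 1.0, 4.0
x = [1.5, 3.0, 2.0]
w = [0.2, 0.5, 0.3]
phi, phi_inv = math.log, math.exp         # psi o phi^{-1} = exp is convex
psi = lambda t: t                         # psi = identity

I, Ic = [0, 2], [1]
WI = sum(w[i] for i in I)
mid = (WI * psi(M_tilde_phi(phi, phi_inv, a, b, x, w, I))
       + (1 - WI) * psi(M_tilde_phi(phi, phi_inv, a, b, x, w, Ic)))
assert psi(M_tilde_phi(phi, phi_inv, a, b, x, w)) <= mid
\end{verbatim}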
Remark 19

Theorems 12, 14 and 17 follow from Theorem 18 by choosing adequate functions \(\phi \), \(\psi \) and appropriate substitutions.□

4 Further generalization

Let \(E\) be a nonempty set, \(\mathfrak {A}\) be an algebra of subsets of \(E\), and \(L\) be a linear class of real-valued functions \(f : E\rightarrow \mathbb {R}\) having the properties:

  • \(L_1\): \(f,g \in L \Rightarrow (\alpha f+\beta g)\in L\) for all \(\alpha ,\beta \in \mathbb {R}\);

  • \(L_2\): \(\mathbf{1}\in L\), i.e., if \(f(t)=1\) for all \(t\in E\), then \(f\in L\);

  • \(L_3\): \(f\in L\), \(E_1\in \mathfrak {A} \Rightarrow f\cdot \chi _{E_1}\in L\),

where \(\chi _{E_1}\) is the indicator function of \(E_1\). It follows from \(L_2\), \(L_3\) that \(\chi _{E_1}\in L\) for every \(E_1\in \mathfrak {A}\).

An isotonic linear functional \(A:L\rightarrow \mathbb {R}\) is a functional satisfying the following properties:

  • \(A_1\): \(A(\alpha f+\beta g)=\alpha A(f)+\beta A(g)\) for \(f,g\in L\), \(\alpha ,\beta \in \mathbb {R}\);

  • \(A_2\): \(f\in L\), \(f(t)\geq 0\) on \(E\Rightarrow A(f)\geq 0\).

It follows from \(L_3\) that for every \(E_1\in \mathfrak {A}\) such that \(A(\chi _{E_1}){\gt}0\), the functional \(A_1\) defined for all \(f\in L\) by \(A_1(f)=\tfrac {A(f\cdot \chi _{E_1})}{A(\chi _{E_1})}\) is an isotonic linear functional with \(A_1(\mathbf{1})=1\). Furthermore, when \(A(\mathbf{1})=1\), we observe that

\begin{equation*} \label{a1} A(\chi _{E_1})+A(\chi _{E\setminus E_1})=1, \end{equation*}
\begin{equation*} A(f)=A(f.\chi _{E_1})+A(f.\chi _{E\setminus E_1}).\\ \end{equation*}


Let \(\phi :[a,b]\rightarrow \mathbb {R}\) be a continuous function. In [ 3 ] , under the above assumptions, the following variant of Jessen’s inequality is proved: if \(\phi \) is convex, then

\begin{equation} \label{jensenmerfuntion} \phi (a+b-A(f))\leq \phi (a)+\phi (b)-A(\phi (f)); \end{equation}
18

if \(\phi \) is concave then the inequality (18) is reversed.

The following refinement of (18) holds.

Theorem 20

Under the above assumptions, if \(\phi \) is convex, then

\begin{eqnarray} \label{jensenmerreffunc} \phi (a+b-A(f))\leq \overline{D}(A,f,\phi ;E_1)\leq \phi (a)+\phi (b)-A(\phi (f)); \end{eqnarray}

where

\begin{align} \label{overD} & \overline{D}(A,f,\phi ;E_1):=\\ =& A(\chi _{E_1})\phi \left(a+b-\tfrac {A(f.\chi _{E_1})}{A(\chi _{E_1})}\right)+ A(\chi _{E\setminus E_1})\phi \left(a+b-\tfrac {A(f.\chi _{E\setminus E_1})}{A(\chi _{E\setminus E_1})}\right)\nonumber \end{align}

for all \(E_1\in \mathfrak {A}\) such that \(0{\lt}A(\chi _{E_1}){\lt}1\).

Proof.
The first inequality follows from the definition of a convex function, and the second follows by applying (18) with \(A_1\) in place of \(A\).
□
Remark 21

The left-hand inequality in (19) also follows from the proof of Theorem 4.1 in [ 7 ] .□

Remark 22

We observe that the inequality (19) can be written in an equivalent form as

\begin{eqnarray*} \phi (a+b-A(f)) \leq \min \limits _{ E_1\in \mathfrak {A}} \overline{D}(A,f,\phi ;E_1) \end{eqnarray*}

and

\begin{eqnarray*} \phi (a)+\phi (b)-A(\phi (f)) \geq \max \limits _{E_1\in \mathfrak {A}} \overline{D}(A,f,\phi ;E_1).\hfil \qed \end{eqnarray*}

The following particular case of Theorem 20 is of interest:

Corollary 23

Let \((\Omega , P, \mu )\) be a probability measure space and let \(f:\Omega \rightarrow [a,b]\) be a measurable function. Then for any continuous convex function \(\phi : [a,b]\rightarrow \mathbb {R}\) and any set \(E_1\in P\) with \(\mu (E_1),\mu (\Omega \setminus E_1){\gt}0\), we have

\begin{align*} \label{integappl} \phi \left(a+b-\int _{\Omega }f{\rm d}\mu \right) & \leq \min \limits _{ E_1\in P} \left[ \mu (E_1)\phi \left(a+b-\tfrac {1}{\mu (E_1)}\int _{E_1}f{\rm d}\mu \right)\right.\nonumber \\ & \quad \left.+ \mu (\Omega \setminus E_1)\phi \left(a+b-\tfrac {1}{\mu (\Omega \setminus E_1)}\int _{\Omega \setminus E_1}f{\rm d}\mu \right)\right] \end{align*}

and

\begin{align*} \phi (a)+\phi (b)-\int _{\Omega }\phi (f){\rm d}\mu & \geq \max \limits _{ E_1\in P} \left[ \mu (E_1)\phi \left(a+b-\tfrac {1}{\mu (E_1)}\int _{E_1}f{\rm d}\mu \right)\right.\nonumber \\ & \quad \left.+ \mu (\Omega \setminus E_1)\phi \left(a+b-\tfrac {1}{\mu (\Omega \setminus E_1)}\int _{\Omega \setminus E_1}f{\rm d}\mu \right)\right]. \end{align*}

Proof.
This is a special case of Theorem 20 for the functional \(A\) defined on the class \(L^1(\mu )\) by \(A(f)=\int _{\Omega }f{\rm d}\mu \).
□
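
For a discrete probability measure the corollary can be tested directly; the following Python sketch (arbitrary illustrative data) checks both bounds for a fixed event \(E_1\):

\begin{verbatim}
import math

mu = [0.25, 0.25, 0.5]                    # masses of a discrete measure
f  = [1.2, 2.8, 2.0]                      # values of f, inside [a, b]
a, b = 1.0, 3.0
phi = lambda t: t * math.log(t)           # convex on (0, infinity)

E1, E1c = [0, 2], [1]                     # 0 < mu(E1) < 1

def piece(idx):
    m = sum(mu[k] for k in idx)
    avg = sum(mu[k] * f[k] for k in idx) / m
    return m * phi(a + b - avg)

lhs = phi(a + b - sum(m * v for m, v in zip(mu, f)))
mid = piece(E1) + piece(E1c)
rhs = phi(a) + phi(b) - sum(m * phi(v) for m, v in zip(mu, f))
assert lhs <= mid <= rhs
\end{verbatim}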
Remark 24

We may also obtain results similar to Theorem 18 for the generalized quasi-arithmetic means of Mercer’s type defined in [ 3 ] as

\[ \tilde{M}_{\phi }(f,A) = \phi ^{-1} (\phi (a) + \phi (b) - A(\phi (f))). \]

□

5 \(n\)-exponential convexity of the Jensen-Mercer differences

Under the assumptions of Theorem 3, using (3) we define the following functionals:

\begin{eqnarray} \label{lin1} \Psi _1(\textbf{w},\textbf{x},\phi ) & =& D(\mathbf{w},\mathbf{x},\phi ;I)-\phi \big(a+b-\sum _{i=1}^nw_ix_i\big)\geq 0,\\ \Psi _2(\textbf{w},\textbf{x},\phi ) & =& \phi (a)+\phi (b)-\sum _{i=1}^nw_i\phi (x_i)-D(\mathbf{w},\mathbf{x},\phi ;I) \geq 0,\\ \Psi _3(\textbf{w},\textbf{x},\phi ) & =& \phi (a)+\phi (b)-\sum _{i=1}^nw_i\phi (x_i)-\phi \big(a+b-\sum _{i=1}^nw_ix_i\big)\geq 0. \end{eqnarray}

Also, under the assumptions of Theorem 7 using (6) we define the functionals as follows:

\begin{eqnarray} \Psi _4(\textbf{w},\textbf{X},\phi ) & =& \tilde{D}(\mathbf{w},\mathbf{X},\phi ;I) -\phi \big(\sum _{j=1}^{m}a_j-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix_{ij}\big)\geq 0,\\ \Psi _5(\textbf{w},\textbf{X},\phi ) & =& \sum _{j=1}^{m}\phi (a_j)-\sum _{j=1}^{m-1}\sum _{i=1}^nw_i\phi (x_{ij})- \tilde{D}(\mathbf{w},\mathbf{X},\phi ;I)\geq 0, \end{eqnarray}

\begin{equation} \Psi _6(\textbf{w},\textbf{X},\phi ) = \sum _{j=1}^{m}\phi (a_j)-\sum _{j=1}^{m-1}\sum _{i=1}^nw_i\phi (x_{ij})- \phi \big(\sum _{j=1}^{m}a_j-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix_{ij}\big)\geq 0. \end{equation}
24

Similarly, under the assumptions of Theorem 20 using (19) we define the following functionals:

\begin{eqnarray} \Psi _7(A,f,\phi ) & =& \overline{D}(A,f,\phi ;E_1)-\phi (a+b-A(f))\geq 0,\\ \Psi _8(A,f,\phi ) & =& \phi (a)+\phi (b)-A(\phi (f))- \overline{D}(A,f,\phi ;E_1)\geq 0,\\ \Psi _9(A,f,\phi ) & =& \phi (a)+\phi (b)-A(\phi (f))-\phi (a+b-A(f))\geq 0. \end{eqnarray}

Now we are in a position to give mean value theorems for \(\Psi _j(.,.,\phi )\), \(j=1,2,\ldots ,9\).

Theorem 25

Let \(\phi \in C^2([a,b])\), \(\textbf{x}=(x_1,...,x_n)\in [a,b]^n\) and let \(\textbf{w}=(w_1,...,w_n)\) be an \(n\)-tuple of positive real numbers such that \(\sum _{i=1}^nw_i=1\). Then there exists \(c_j\in [a,b]\) such that

\begin{equation*} \label{meaneq1} \Psi _j(\textbf{w},\textbf{x},\phi )=\tfrac {\phi ''(c_j)}{2} \Psi _j(\textbf{w},\textbf{x},\phi _0), \, \, \, \hbox{where}\ \phi _0(x)=x^2;\ j=1,2,3. \end{equation*}
Proof.
Fix \(j\in \{ 1,2,3\} \).
Since the functions

\[ \phi _1(x)=\tfrac {\Gamma }{2}x^2-\phi (x),\, \, \phi _2(x)=\phi (x)-\tfrac {\gamma }{2}x^2 \]

are convex, where \(\Gamma =\max \limits _{x\in [a,b]}\phi ^{\prime \prime } (x)\) and \(\gamma =\min \limits _{x\in [a,b]}\phi ^{\prime \prime } (x)\), we have

\begin{equation} \label{profeq1} \Psi _j(\textbf{w},\textbf{x},\phi _1)\geq 0 \end{equation}
28

and

\begin{equation} \label{profeq2} \Psi _j(\textbf{w},\textbf{x},\phi _2)\geq 0. \end{equation}
29

From (28) and (29) we get

\[ \tfrac {\gamma }{2}\Psi _j(\textbf{w},\textbf{x},\phi _0)\leq \Psi _j(\textbf{w},\textbf{x},\phi )\leq \tfrac {\Gamma }{2}\Psi _j(\textbf{w},\textbf{x},\phi _0). \]

If \(\Psi _j(\textbf{w},\textbf{x},\phi _0)=0\) then there is nothing to prove. Suppose \(\Psi _j(\textbf{w},\textbf{x},\phi _0){\gt}0\). We have

\[ \gamma \leq \tfrac {2\Psi _j(\textbf{w},\textbf{x},\phi )}{\Psi _j(\textbf{w},\textbf{x},\phi _0)}\leq \Gamma . \]

Hence, there exists \(c_j\in [a,b]\) such that

\begin{equation*} \Psi _j(\textbf{w},\textbf{x},\phi )=\tfrac {\phi ''(c_j)}{2} \Psi _j(\textbf{w},\textbf{x},\phi _0). \end{equation*}

□
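
The point \(c_j\) can be located numerically when \(\phi ''\) is invertible. For \(\phi =\exp \) (so that \(\phi ''=\exp \)), the Python sketch below (illustrative data only) recovers such a point for the functional \(\Psi _3\):

\begin{verbatim}
import math

a, b = 1.0, 3.0
x = [1.2, 2.8, 2.0]
w = [0.25, 0.25, 0.5]

def Psi3(phi):
    """Psi_3(w, x, phi): the gap in the Jensen-Mercer inequality (1)."""
    lhs = phi(a + b - sum(wi * xi for wi, xi in zip(w, x)))
    return phi(a) + phi(b) - sum(wi * phi(xi) for wi, xi in zip(w, x)) - lhs

# Theorem 25: Psi_3(phi) = phi''(c)/2 * Psi_3(x^2), so for phi = exp,
# c = log(2 * Psi_3(exp) / Psi_3(x^2)) and c must lie in [a, b].
c = math.log(2 * Psi3(math.exp) / Psi3(lambda t: t * t))
assert a <= c <= b
\end{verbatim}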
Theorem 26

Let \(\phi ,\psi \in C^2([a,b])\), \(\textbf{x}=(x_1,...,x_n)\in [a,b]^n\) and let \(\textbf{w}=(w_1,...,w_n)\) be an \(n\)-tuple of positive real numbers such that \(\sum _{i=1}^nw_i=1\). Then there exists \(c_j\in [a,b]\) such that

\begin{equation*} \label{meaneq2} \tfrac { \Psi _j(\textbf{w},\textbf{x},\phi )}{ \Psi _j(\textbf{w},\textbf{x},\psi )}=\tfrac {\phi ''(c_j)}{\psi ''(c_j)},\, \, j=1,2,3, \end{equation*}

provided that the denominators are non-zero.

Proof.
Let us define

\[ g_j=a_j\phi -b_j\psi ,\, \, \, j=1,2,3, \]
\[ \mbox{where }\, a_j=\Psi _j(\textbf{w},\textbf{x},\psi ),\, \, \; \; \; \; b_j=\Psi _j(\textbf{w},\textbf{x},\phi ). \]

Obviously \(g_j\in C^2([a,b])\) and \(\Psi _j(\textbf{w},\textbf{x},g_j)=a_j\Psi _j(\textbf{w},\textbf{x},\phi )-b_j\Psi _j(\textbf{w},\textbf{x},\psi )=0\), so by Theorem 25 there exists \(c_j\in [a,b]\) such that

\[ \left(\tfrac {a_j\phi ''(c_j)}{2}-\tfrac {b_j\psi ''(c_j)}{2}\right)\Psi _j(\textbf{w},\textbf{x},\phi _0)=0. \]

Since \(\Psi _j(\textbf{w},\textbf{x},\phi _0)\neq 0\) (otherwise we have a contradiction with \(\Psi _j(\textbf{w},\textbf{x},\psi )\neq 0\) by Theorem 25), we get

\begin{equation*} \tfrac { \Psi _j(\textbf{w},\textbf{x},\phi )}{ \Psi _j(\textbf{w},\textbf{x},\psi )}=\tfrac {\phi ''(c_j)}{\psi ''(c_j)},\, \, j=1,2,3. \end{equation*}

□
Theorem 27

Let \(\phi \in C^2([a,b])\), let \(\textbf{a}=(a_1,...,a_m)\) with \(a_j\in [a,b]\), and let \(\textbf{X}=(x_{ij})\) be a real \(n\times m\) matrix such that \(x_{ij}\in [a,b]\) for all \(i=1,\ldots ,n\), \(j=1,\ldots ,m\), and such that \(\textbf{a}\) majorizes each row of \(\textbf{X}\). Then there exists \(c_k\in [a,b]\) such that

\[ \Psi _k(\textbf{w},\textbf{X},\phi )=\tfrac {\phi ''(c_k)}{2} \Psi _k(\textbf{w},\textbf{X},\phi _0), \, \, \, \hbox{where}\ \phi _0(x)=x^2;\ k=4,5,6. \]
Theorem 28

Let \(\phi ,\psi \in C^2([a,b])\). Suppose that \(\textbf{a}=(a_1,...,a_m)\) with \(a_j\in [a,b]\), that \(\textbf{X}=(x_{ij})\) is a real \(n\times m\) matrix such that \(x_{ij}\in [a,b]\) for all \(i=1,\ldots ,n\), \(j=1,\ldots ,m\), and that \(\textbf{a}\) majorizes each row of \(\textbf{X}\). Then there exists \(c_k\in [a,b]\) such that

\[ \tfrac { \Psi _k(\textbf{w},\textbf{X},\phi )}{ \Psi _k(\textbf{w},\textbf{X},\psi )}=\tfrac {\phi ''(c_k)}{\psi ''(c_k)};\, \, k=4,5,6, \]

provided that the denominators are non-zero.

Theorem 29

Suppose \(\phi \in C^2([a,b])\) and that \(L\) satisfies properties \(L_1\), \(L_2\) on a nonempty set \(E\). Assume that \(A\) is an isotonic linear functional on \(L\) with \(A(\mathbf{1})=1\). Let \(f\in L\) be such that \(\phi (f)\in L\). Then there exists \(c_j\in [a,b]\) such that

\begin{equation*} \Psi _j(A,f,\phi )=\tfrac {\phi ''(c_j)}{2} \Psi _j(A,f,\phi _0), \, \, \, \hbox{where}\ \phi _0(x)=x^2;\ j=7,8,9. \end{equation*}
Theorem 30

Suppose \(\phi ,\psi \in C^2([a,b])\) and that \(L\) satisfies properties \(L_1\), \(L_2\) on a nonempty set \(E\). Assume that \(A\) is an isotonic linear functional on \(L\) with \(A(\mathbf{1})=1\). Let \(f\in L\) be such that \(\phi (f),\psi (f)\in L\). Then there exists \(c_j\in [a,b]\) such that

\begin{equation*} \tfrac { \Psi _j(A,f,\phi )}{ \Psi _j(A,f,\psi )}=\tfrac {\phi ''(c_j)}{\psi ''(c_j)},\, \, j=7,8,9, \end{equation*}

provided that the denominators are non-zero.

Remark 31

If the inverse of \(\tfrac {\phi ''}{\psi ''}\) exists, then from the above mean value theorems we can define the generalized means

\begin{equation} \label{genmean} c_j=\left(\tfrac {\phi ''}{\psi ''}\right)^{-1}\left(\tfrac { \Psi _j(.,.,\phi )}{ \Psi _j(.,.,\psi )}\right),\, \, j=1,2,\ldots ,9.\hfil \qed \end{equation}
30

Definition 32 [ 10 ]

A function \( \phi : J\rightarrow \mathbb {R} \) is n-exponentially convex  in the Jensen sense on the interval \(J\) if

\[ \sum _{k,l=1}^{n} \alpha _{k} \alpha _{l} \phi \left(\tfrac { x_{k} + x_{l}}{2} \right) \geq 0 \]

holds for all \( \alpha _{k} \in \mathbb {R} \) and all \( x_{k} \in J \), \( k = 1,2,...,n \).

A function \( \phi : J\rightarrow \mathbb {R} \) is n-exponentially convex  if it is n-exponentially convex  in the Jensen sense and continuous on \(J\).

Remark 33

From the definition it is clear that \(1\)-exponentially convex functions in the Jensen sense are in fact nonnegative functions. Also, \(n\)-exponentially convex functions in the Jensen sense are \(m\)-exponentially convex in the Jensen sense for every \(m \in \mathbb {N}, m \leq n\).□

Proposition 34

If \( \phi : J \rightarrow \mathbb {R} \) is an \(n\)-exponentially convex function, then the matrix \(\Big[\phi \left( \tfrac {x_{k}+x_{l}}{2} \right) \Big]_{k,l=1}^{m}\) is positive semi-definite for every choice of \(x_1,...,x_m\in J\) and every \(m\in \mathbb {N}\), \(m\leq n\). In particular,

\[ \det \left[\phi \left( \tfrac {x_{k}+x_{l}}{2} \right) \right]_{k,l=1}^{m} \geq 0 \]

for all \( m = 1,2,...,n \).
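
Proposition 34 is easy to observe numerically for the classical exponentially convex function \(\phi (x)=e^x\); the Python sketch below (self-contained, no external libraries, illustrative points only) checks that the leading principal minors are nonnegative:

\begin{verbatim}
import math

def det(M):
    """Determinant via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[p][i]) < 1e-15:
            return 0.0
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            fct = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= fct * M[i][c]
    return d

pts = [0.3, 1.1, 2.0, 2.7]
G = [[math.exp((s + t) / 2) for t in pts] for s in pts]
for m in range(1, len(pts) + 1):
    minor = [row[:m] for row in G[:m]]
    assert det(minor) >= -1e-9             # nonnegative up to rounding
\end{verbatim}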

Definition 35

A function \( \phi : J \rightarrow \mathbb {R} \) is exponentially convex in the Jensen sense on \(J\) if it is \(n\)-exponentially convex in the Jensen sense for all \(n \in \mathbb {N}\). A function \( \phi : J \rightarrow \mathbb {R} \) is exponentially convex if it is exponentially convex in the Jensen sense and continuous.

Remark 36

It is easy to show that \( \phi : [a,b] \rightarrow \mathbb {R}^{+} \) is \(\log \)-convex in the Jensen sense if and only if

\[ \alpha ^2\phi (x)+2\alpha \beta \phi \left(\tfrac {x+y}{2}\right)+\beta ^2\phi (y)\geq 0 \]

holds for every \(\alpha ,\beta \in \mathbb {R}\) and \(x,y\in [a,b]\). It follows that a function is log-convex in the Jensen sense if and only if it is \(2\)-exponentially convex in the Jensen sense.

Also, using basic convexity theory it follows that a function is \(\log \)-convex if and only if it is \(2\)-exponentially convex.□

When dealing with functions of different degrees of smoothness, divided differences are very useful.
Definition 37

The second order divided difference of a function \(\phi :[a,b]\rightarrow \mathbb {R}\) at mutually different points \(y_0,y_1,y_2\in [a,b]\) is defined recursively by

\[ [y_i;\phi ]=\phi (y_i),\, \, i=0,1,2 \]
\[ [y_i,y_{i+1};\phi ]=\tfrac {\phi (y_{i+1})-\phi (y_i)}{y_{i+1}-y_i},\, \, i=0,1 \]

\begin{equation} \label{divid} [y_0,y_{1},y_{2};\phi ]=\tfrac {[y_1,y_{2};\phi ]-[y_0,y_{1};\phi ]}{y_{2}-y_0}. \end{equation}
31
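
In code the recursion reads as follows (a minimal sketch; the points must be mutually different):

\begin{verbatim}
def second_divided_difference(phi, y0, y1, y2):
    """[y0, y1, y2; phi] computed via the recursion (31)."""
    d01 = (phi(y1) - phi(y0)) / (y1 - y0)
    d12 = (phi(y2) - phi(y1)) / (y2 - y1)
    return (d12 - d01) / (y2 - y0)

# nonnegative for a convex function, e.g. phi(t) = t**4:
assert second_divided_difference(lambda t: t ** 4, -1.0, 0.5, 2.0) >= 0
\end{verbatim}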

Remark 38

The value \([y_0,y_1,y_2;\phi ]\) is independent of the order of the points \(y_0,y_1\) and \(y_2\). By taking limits this definition may be extended to the cases in which two or all three points coincide, as follows: for all \(y_0,\, y_1,\, y_2 \in [a,b]\),

\begin{equation*} \label{divi} \lim \limits _{y_1 \rightarrow y_0 } [y_0,y_1,y_2;\phi ] =[y_0,y_0,y_2;\phi ]=\tfrac {\phi (y_2)-\phi (y_0)-{\phi ^{'}(y_0)}(y_2-y_0)}{{(y_2-y_0)}^2},\, \, \, \, y_2\neq y_0, \end{equation*}

provided that \(\phi '\) exists, and furthermore, taking the limits \(y_i\rightarrow y_0, i=1,2\) in (31), we get

\begin{equation*} \label{2} [y_0,y_0,y_0;\phi ]\, \, =\, \lim \limits _{y_i \rightarrow y_0 }[y_0,y_1,y_2;\phi ]=\tfrac {{\phi ^{''}(y_0)}}{2}\, \, \text{for}\, \, \, i=1,2 \end{equation*}

provided that \(\phi ^{''}\) exists on \([a,b]\).□

We use an idea from [ 5 ] to give an elegant method of producing \(n\)-exponentially convex and exponentially convex functions, applying the functionals \(\Psi _j(.,.,\phi )\), \(j=1,\ldots ,9\), to a given family of functions with the same property.

Theorem 39

Let \(\Lambda =\{ \phi _t:t\in J\} \), where \(J\) is an interval in \(\mathbb {R}\), be a family of functions defined on an interval \([a,b]\), such that the function \(t\rightarrow [y_0,y_1,y_2;\phi _t]\) is \(n\)-exponentially convex in the Jensen sense on \(J\) for every three mutually different points \(y_0,y_1,y_2\in [a,b]\). Let \(\Psi _j(.,.,\phi _t)\) \((j=1,2,\ldots ,9)\) be linear functionals defined as in (19)–(27). Then \(t\rightarrow \Psi _j(.,.,\phi _t)\) is an \(n\)-exponentially convex function in the Jensen sense on \(J\). If the function \(t\rightarrow \Psi _j(.,.,\phi _t)\) is continuous on \(J\), then it is \(n\)-exponentially convex on \(J\).

Proof.
Fix \(1\leq j \leq 9\).
Let us define the function

\[ \omega (y)=\sum _{k,l=1}^{n}b_{k}b_{l}\phi _{t_{kl}}(y), \]

where \(t_{kl}=\tfrac {t_{k}+t_{l}}{2}\), \(t_{k}\in J,b_k\in \mathbb {R}\), \(k=1,2,...,n\).
Since the function \(t\rightarrow [y_0,y_1,y_2;\phi _t]\) is \(n\)-exponentially convex in the Jensen sense, we have

\[ [y_0,y_1,y_2;\omega ]=\sum _{k,l=1}^{n}b_{k}b_{l}[y_0,y_1,y_2;\phi _{t_{kl}}]\geq 0, \]

which implies that \(\omega \) is a convex function on \([a,b]\) and therefore we have \(\Psi _j(.,.,\omega )\geq 0\); \(j=1,2,...,9\). Hence

\[ \sum _{k,l=1}^{n}b_{k}b_{l}\Psi _j(.,.,\phi _{t_{kl}})\geq 0. \]

We conclude that the function \(t\rightarrow \Psi _j(.,.,\phi _t)\) is an \(n\)-exponentially convex function in the Jensen sense on \(J\).

If the function \(t\rightarrow \Psi _j(.,.,\phi _t)\) is continuous on \(J\), then it is \(n\)-exponentially convex on \(J\) by definition.

□
As a consequence of the above theorem we can give the following corollary.
Corollary 40

Let \(\Lambda =\{ \phi _t:t\in J\} \), where \(J\) is an interval in \(\mathbb {R}\), be a family of functions defined on an interval \([a,b]\), such that the function \(t\rightarrow [y_0,y_1,y_2;\phi _t]\) is exponentially convex in the Jensen sense on \(J\) for every three mutually different points \(y_0,y_1,y_2\in [a,b]\). Let \(\Psi _j(.,.,\phi _t)\) \((j=1,2,\ldots ,9)\) be linear functionals defined as in (19)–(27). Then \(t\rightarrow \Psi _j(.,.,\phi _t)\) is an exponentially convex function in the Jensen sense on \(J\). If the function \(t\rightarrow \Psi _j(.,.,\phi _t)\) is continuous on \(J\), then it is exponentially convex on \(J\).

Corollary 41

Let \(\Lambda =\{ \phi _t:t\in J\} \), where \(J\) is an interval in \(\mathbb {R}\), be a family of functions defined on an interval \([a,b]\), such that the function \(t\rightarrow [y_0,y_1,y_2;\phi _t]\) is \(2\)-exponentially convex in the Jensen sense on \(J\) for every three mutually different points \(y_0,y_1,y_2\in [a,b]\). Let \(\Psi _j(.,.,\phi _t)\) \((j=1,2,\ldots ,9)\) be linear functionals defined as in (19)–(27). Then the following statements hold:

(i) If the function \(t\rightarrow \Psi _j(.,.,\phi _t)\) is continuous on \(J\), then it is \(2\)-exponentially convex on \(J\), and thus log-convex on \(J\).

(ii) If the function \(t\rightarrow \Psi _j(.,.,\phi _t)\) is strictly positive and differentiable on \(J\), then for every \(s,t,u,v\in J\) such that \(s\leq u\) and \(t\leq v\), we have

\begin{equation} \label{meangenmono} \mathfrak { B}_{s,t}(.,.,\Psi _j,\Lambda )\leq \mathfrak { B}_{u,v}(.,.,\Psi _j,\Lambda ) \end{equation}
32

where

\begin{equation} \label{meangen} \mathfrak { B}^{j}_{s,t}(\Lambda )=\mathfrak { B}_{s,t}(.,.,\Psi _j,\Lambda )=\begin{cases} \left(\tfrac {\Psi _j(.,.,\phi _s)}{\Psi _j(.,.,\phi _t)}\right)^{\tfrac {1}{s-t}}, & s\neq t,\\ \exp \left(\tfrac {\tfrac {d}{ds}\Psi _j(.,.,\phi _s)}{\Psi _j(.,.,\phi _s)}\right), & s=t, \end{cases} \end{equation}
33

for \(\phi _s,\phi _t\in \Lambda \).

Proof.

(i) See Remark 36 and Theorem 39.

(ii) From the definition of a convex function \(\phi \) we have the following inequality [ 11 , p. 2 ]

    \begin{equation} \label{a5} \tfrac {\phi \left(s\right)\, -\, \phi \left(t\right)}{s\, -\, t}\, \leq \, \tfrac {\phi \left(u\right)\, -\, \phi \left(v\right)}{u\, -\, v}, \end{equation}
    36

    \( \forall \, s, t,u,v \in J\) such that \(s\leq u,\, t\leq v,\, s\neq t,\, u\neq v\).
    Since by (i) the function \(s\mapsto \Psi _j(.,.,\phi _s)\) is \(\log \)-convex, setting \(\phi (x)=\ln \Psi _j(.,.,\phi _x)\) in (36) we obtain

    \begin{equation} \label{eq7} \tfrac {\ln \Psi _j(.,.,\phi _s)\, -\ln \Psi _j(.,.,\phi _t)}{s-t} \, \leq \, \tfrac {\ln \Psi _j(.,.,\phi _u)-\ln \Psi _j(.,.,\phi _v)}{u-v} \end{equation}
    37

    for \(s\leq u,\, t\leq v,\, s\neq t,\, u\neq v\), which is equivalent to (32). The cases \(s=t\) and \(u=v\) follow from (36) by taking limits.

□
Remark 42

In [ 1 ] the authors gave related results for the Jensen-Mercer inequality.□

6 Examples

In this section we vary the choice of the family of functions in order to give some examples of exponentially convex functions and to construct some means, in the same way as in [ 5 ] and [ 10 ] . For simplicity we write \(J(\textbf{a},X,\textbf{w})=\sum _{j=1}^{m}a_j-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix_{ij}\). For a family of functions \(\phi _t\), \(t\in J\), where \(J\) is an interval in \(\mathbb {R}\), we assume the following conditions:

\[ \lim _{t\rightarrow t_0}A(\phi _t)=A(\lim _{t\rightarrow t_0}\phi _t), \]
\[ \lim _{t\rightarrow t_0}\tfrac {A(\phi _{t+\Delta t})-A(\phi _t)}{\Delta t}=A\left(\lim _{t\rightarrow t_0}\tfrac {\phi _{t+\Delta t}-\phi _t}{\Delta t}\right). \]

Example 43

Let

\begin{equation*} \Lambda _1= \{ \psi _t:\mathbb {R}\rightarrow [0,\infty ):\, t\in \mathbb {R}\} \end{equation*}

be the family of functions defined by

\begin{equation*} \label{l2.101} \psi _{t}(x)\, =\, \left\{ \begin{array}{ll} \tfrac {1}{t^{2}}\, {\rm e}^{tx}, & \hbox{t\, $\neq $ 0}, \\ \tfrac {1}{2}\, x^{2}, & \hbox{t\, = 0.} \end{array} \right. \end{equation*}

Since \(\psi _{t}\) is a convex function on \(\mathbb {R}\) and \(t\mapsto \psi ''_{t}(x)\) is an exponentially convex function [ 5 ] , arguing as in the proof of Theorem 39 we conclude that \(t\mapsto [y_0,y_1,y_2;\psi _t]\) is exponentially convex (and so exponentially convex in the Jensen sense). Using Corollary 40 we conclude that \(t\mapsto \Psi _j(.,.,\psi _t)\), \(j=1,...,9\), are exponentially convex in the Jensen sense. It is easy to see that these mappings are continuous, so they are exponentially convex.
Assume that \(\Psi _j(.,.,\psi _t){\gt}0\), \(j=1,2,...,9\). By using this family of convex functions in (30) for \(j=1,2,\ldots ,9\), we obtain the following means:

\begin{eqnarray*} \Gamma ^{j}_{s,t}\, \, =\, \left\{ \begin{array}{ll} \tfrac {1}{s-t}\ln \left(\tfrac { \Psi _j(.,., \psi _s)}{\Psi _j(.,., \psi _t)}\right),\, \, \, & s\neq t, \\ \tfrac { \Psi _j(.,.,id. \psi _s)}{\Psi _j(.,., \psi _s)}-\tfrac {2}{s} , \, \, \, & s=t\neq 0, \\ \tfrac { \Psi _j(.,.,id. \psi _0)}{3\Psi _j(.,., \psi _0)},\, \, \, & s=t=0. \end{array} \right. \end{eqnarray*}

In particular for \(j=6\) we have

\begin{eqnarray*} \Gamma ^{6}_{s,t}& =& \tfrac {1}{s-t}\ln \left(\tfrac {t^2\big(\sum _{j=1}^{m}e^{sa_j}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ie^{sx_{ij}}- e^{sJ(\textbf{a},X,\textbf{w})}\big) } {s^2\big(\sum _{j=1}^{m}e^{ta_j}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ie^{tx_{ij}}- e^{tJ(\textbf{a},X,\textbf{w})}\big) }\right),\, s\neq t;\; s,t\neq 0,\\ \Gamma ^{6}_{s,s}& =& \tfrac {\sum _{j=1}^{m}a_je^{sa_j}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix_{ij}e^{sx_{ij}} -J(\textbf{a},X,\textbf{w})e^{sJ(\textbf{a},X,\textbf{w})} } {\sum _{j=1}^{m}e^{sa_j}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ie^{sx_{ij}}- e^{sJ(\textbf{a},X,\textbf{w})} }-\tfrac {2}{s},\, s\neq 0,\\ \Gamma ^{6}_{s,0}& =& \tfrac {1}{s}\ln \left(\tfrac {2\big(\sum _{j=1}^{m}e^{sa_j}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ie^{sx_{ij}}- e^{sJ(\textbf{a},X,\textbf{w})}\big) } {s^2\big(\sum _{j=1}^{m}a_j^{2}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix_{ij}^{2}- J^{2}(\textbf{a},X,\textbf{w})\big) }\right),\, s\neq 0,\\ \Gamma ^{6}_{0,0}& =& \tfrac {\sum _{j=1}^{m}a_j^{3}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix^{3}_{ij} -J^{3}(\textbf{a},X,\textbf{w}) } {3(\sum _{j=1}^{m}a_j^{2}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix^{2}_{ij} -J^{2}(\textbf{a},X,\textbf{w}))} .\\ \end{eqnarray*}

Since \(\Gamma ^{j}_{s,t}= \ln \mathfrak {B}^{j}_{s,t}(\Lambda _1)\), \(j=1,2,...,9\), by (32) these means are monotonic.□
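
The monotonicity asserted above can be observed numerically. The Python sketch below (an illustration for the functional \(\Psi _3\) on arbitrary data, not part of the formal development) evaluates \(\Gamma ^{3}_{s,t}\) for \(s\neq t\) and checks (32) along one chain of parameters:

\begin{verbatim}
import math

a, b = 1.0, 3.0
x = [1.2, 2.8, 2.0]
w = [0.25, 0.25, 0.5]

def Psi3(phi):
    """The Jensen-Mercer gap Psi_3 for the data above."""
    lhs = phi(a + b - sum(wi * xi for wi, xi in zip(w, x)))
    return phi(a) + phi(b) - sum(wi * phi(xi) for wi, xi in zip(w, x)) - lhs

def psi_t(t):
    """A member of the family Lambda_1."""
    if t == 0:
        return lambda u: u * u / 2
    return lambda u: math.exp(t * u) / (t * t)

def Gamma(s, t):
    """Gamma^3_{s,t} for s != t."""
    return math.log(Psi3(psi_t(s)) / Psi3(psi_t(t))) / (s - t)

assert Gamma(0.5, 1.0) <= Gamma(1.5, 2.0) <= Gamma(2.5, 3.0)
assert a <= Gamma(0.5, 1.0) <= b          # these values are means in [a, b]
\end{verbatim}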

Example 44

Let

\begin{equation*} \Lambda _2=\{ \varphi _t:(0,\infty )\rightarrow \mathbb {R}:t\in \mathbb {R}\} \end{equation*}

be the family of functions defined by

\begin{equation*} \varphi _t(x)\, =\, \left\{ \begin{array}{rcl} \tfrac {x^{t}}{t(t-1)}, & \hbox{t $\neq $0,1}, \\ -\ln x, & \hbox{t=0}, \\ x\ln x, & \hbox{t=1}. \end{array} \right. \end{equation*}

Since \(\varphi _t\) is a convex function for \(x\in \mathbb {R}^{+}\) and \(t\rightarrow \varphi ''_{t}(x)\) is exponentially convex, by the same arguments as in the previous example we conclude that \(\Psi _j(.,.,\varphi _t)\), \(j=1,...,9\), are exponentially convex. We assume that \([a,b]\subset \mathbb {R}^+\) and \(\Psi _j(.,.,\varphi _t){\gt}0\), \(j=1,...,9\). By using this family of convex functions in (30) for \(j=1,2,\ldots ,9\) we have the following means:

\begin{eqnarray*} \tilde{\Gamma }^{j}_{s,t} =\, \left\{ \begin{array}{ll} \left(\tfrac { \Psi _j(.,., \varphi _s)}{\Psi _j(.,., \varphi _t)}\right)^{\tfrac {1}{s-t}},& s\neq t, \\ \exp \Big(\tfrac {1-2s}{s(s-1)}-\tfrac { \Psi _j(.,.,\varphi _0 \varphi _s)}{\Psi _j(.,., \varphi _s)}\Big),& s = t\neq 0,1, \\ \exp \Big(1-\tfrac { \Psi _j(.,.,{\varphi _0}^2)}{2\Psi _j(.,., \varphi _0)}\Big),& s=t=0, \\ \exp \Big(-1-\tfrac { \Psi _j(.,.,\varphi _0 \varphi _1)}{2\Psi _j(.,., \varphi _1)}\Big),& s=t=1. \end{array} \right. \end{eqnarray*}

In particular for \(j=6\) we have

\begin{align*} & \tilde{\Gamma }^{6}_{s,t}\! =\! \left(\tfrac {t(t-1)}{s(s-1)}\cdot \tfrac {\sum _{j=1}^{m}a_j^{s}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix^{s}_{ij}- J^{s}(\textbf{a},X,\textbf{w}) } {\sum _{j=1}^{m}a_j^{t}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix^{t}_{ij}- J^{t}(\textbf{a},X,\textbf{w}) }\right)^\frac {1}{s-t},\ s\neq t;\; s,t\neq 0,1,\\ & \tilde{\Gamma }^{6}_{s,s}=\exp \left(\tfrac {1-2s}{s(s-1)} \right.\\ & \quad \ \quad \left.+\tfrac {\sum _{j=1}^{m}a_j^{s}\ln a_j-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix^{s}_{ij}\ln x_{ij}- J^{s}(\textbf{a},X,\textbf{w})\ln J(\textbf{a},X,\textbf{w}) } {\sum _{j=1}^{m}a_j^{s}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix^{s}_{ij}- J^{s}(\textbf{a},X,\textbf{w}) } \right),\ s\neq 0,1,\\ & \tilde{\Gamma }^{6}_{s,0}=\left(\tfrac {\sum _{j=1}^{m}a_j^{s}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix^{s}_{ij}- J^{s}(\textbf{a},X,\textbf{w}) } {s(1-s)\big(\sum _{j=1}^{m}\ln a_j-\sum _{j=1}^{m-1}\sum _{i=1}^nw_i\ln x_{ij}-\ln J(\textbf{a},X,\textbf{w})\big) }\right)^\frac {1}{s},\ s\neq 0,1,\\ & \tilde{\Gamma }^{6}_{s,1}=\left(\tfrac {\sum _{j=1}^{m}a_j^{s}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix^{s}_{ij}- J^{s}(\textbf{a},X,\textbf{w}) } {s(s-1)\big(\sum _{j=1}^{m}a_j\ln a_j-\sum _{j=1}^{m-1}\sum _{i=1}^nw_i x_{ij}\ln x_{ij}-J(\textbf{a},X,\textbf{w})\ln J(\textbf{a},X,\textbf{w})\big) }\right)^\frac {1}{s-1},\ s\neq 0,1,\\ & \tilde{\Gamma }^{6}_{0,0}=\exp \left(1+\tfrac {\sum _{j=1}^{m}\ln ^{2} a_j -\sum _{j=1}^{m-1}\sum _{i=1}^nw_i\ln ^{2} x_{ij}- \ln ^{2}J(\textbf{a},X,\textbf{w}) } {2(\sum _{j=1}^{m}\ln a_j -\sum _{j=1}^{m-1}\sum _{i=1}^nw_i\ln x_{ij}- \ln J(\textbf{a},X,\textbf{w})) }\right),\\ & \tilde{\Gamma }^{6}_{1,1}=\exp \left(-1+\tfrac {\sum _{j=1}^{m}a_j\ln ^{2} a_j -\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix_{ij}\ln ^{2}x_{ij}- J(\textbf{a},X,\textbf{w})\ln ^{2}J(\textbf{a},X,\textbf{w}) } {2(\sum _{j=1}^{m}a_j\ln a_j -\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix_{ij}\ln x_{ij}- J(\textbf{a},X,\textbf{w})\ln J(\textbf{a},X,\textbf{w})) }\right). \end{align*}

Since \(\tilde{\Gamma }^{j}_{s,t}= \mathfrak {B}^{j}_{s,t}(\Lambda _2)\), \(j=1,2,...,9\), by (32) these means are monotonic.□

Example 45

Let

\begin{equation*} \Lambda _3= \{ \theta _t:(0,\infty )\rightarrow (0,\infty ):\, t\in (0,\infty )\} \end{equation*}


be the family of functions defined by

\begin{eqnarray*} \theta _t(x)=\tfrac {{\rm e}^{-x\sqrt{t}}}{t}. \end{eqnarray*}

Since \(t\rightarrow \tfrac {d^2}{dx^2}\theta _{t}(x)= {\rm e}^{-x\sqrt{t}}\) is exponentially convex, being the Laplace transform of a non-negative function [ 5 ] , by the same argument as in Example 43 we conclude that \(\Psi _j(.,.,\theta _t)\), \(j=1,...,9\), are exponentially convex. We assume that \([a,b]\subset \mathbb {R}^+\) and \(\Psi _j(.,.,\theta _t){\gt}0\), \(j=1,...,9\). For this family of convex functions the means \(\mathfrak { B}^{j}_{s,t}(\Lambda _3)\), \(j=1,2,\ldots ,9\), from (33) become

\begin{eqnarray*} \mathfrak { B}^{j}_{s,t}(\Lambda _3)\, \, =\, \left\{ \begin{array}{ll} \left(\tfrac { \Psi _j(.,., \theta _s)}{\Psi _j(.,., \theta _t)}\right)^{\tfrac {1}{s-t}},& s\neq t, \\ \exp \Big(-\tfrac {\Psi _j(.,.,id. \theta _s)}{2\sqrt{s}(\Psi _j(.,., \theta _s))}-\tfrac {1}{s}\Big),& s=t. \end{array} \right. \end{eqnarray*}

In particular for \(j=6\) we have

\begin{align*} & \mathfrak { B}^{6}_{s,t}(\Lambda _3)=\left(\tfrac {t}{s}.\tfrac {\sum _{j=1}^{m}e^{-a_j\sqrt{s}}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ie^{-x_{ij}\sqrt{s}}- e^{-J(\textbf{a},X,\textbf{w})\sqrt{s}} } {\sum _{j=1}^{m}e^{-a_j\sqrt{t}}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ie^{-x_{ij}\sqrt{t}}- e^{-J(\textbf{a},X,\textbf{w})\sqrt{t}} }\right)^\frac {1}{s-t},\, s\neq t,\\ & \mathfrak { B}^{6}_{s,s}(\Lambda _3)=\\ & =\exp \left(-\tfrac {1}{2\sqrt{s}}\tfrac {\sum _{j=1}^{m}a_je^{-a_j\sqrt{s}}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix_{ij}e^{-x_{ij}\sqrt{s}}- J(\textbf{a},X,\textbf{w})e^{-J(\textbf{a},X,\textbf{w})\sqrt{s}} } {\sum _{j=1}^{m}e^{-a_j\sqrt{s}}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ie^{-x_{ij}\sqrt{s}}- e^{-J(\textbf{a},X,\textbf{w})\sqrt{s}} }-\tfrac {1}{s}\right). \end{align*}

Monotonicity of \(\mathfrak { B}^{j}_{s,t}(\Lambda _3)\) follows from (32). By (30)

\[ \bar{\Gamma }^{j}_{s,t}= -(\sqrt{s}+\sqrt{t})\ln \mathfrak {B}^{j}_{s,t}(\Lambda _3), \; \; \; j=1,2,...,9, \]

defines a class of means.□

Example 46

Let

\begin{equation*} \Lambda _4=\{ \phi _t:(0,\infty )\rightarrow (0,\infty ): t\in (0,\infty )\} \end{equation*}

be the family of functions defined by

\begin{eqnarray*} \phi _t(x)= \left\{ \begin{array}{ll} \tfrac {t^{-x}}{(\ln t)^2},& t\neq 1, \\ \tfrac {x^{2}}{2},& t=1. \end{array} \right. \end{eqnarray*}

Since \({\tfrac {d^2}{dx^2}\phi _{t}(x)}= t^{-x}={\rm e}^{-x\ln t}{\gt}0\) for \(x{\gt}0\), and for each \(x{\gt}0\) the function \(t\rightarrow t^{-x}\) is exponentially convex (being essentially the Laplace transform of a non-negative function), by the same argument as in Example 43 we conclude that \(t\rightarrow \Psi _j(.,.,\phi _t)\), \(j=1,...,9\), are exponentially convex. We assume that \([a,b]\subset \mathbb {R}^+\) and \(\Psi _j(.,.,\phi _t){\gt}0\), \(j=1,...,9\). For this family of convex functions the means \(\mathfrak { B}^{j}_{s,t}( \Lambda _4)\), \(j=1,2,\ldots ,9\), from (33) become

\begin{eqnarray*} \mathfrak { B}^{j}_{s,t}( \Lambda _4) = \left\{ \begin{array}{ll} \left(\tfrac { \Psi _j(.,., \phi _s)}{\Psi _j(.,., \phi _t)}\right)^{\tfrac {1}{s-t}},& s\neq t, \\ \exp \Big(-\tfrac {\Psi _j(.,.,id. \phi _s)}{s\Psi _j(.,., \phi _s)}-\tfrac {2}{s\ln s}\Big),& s=t\neq 1, \\ \exp \Big(-\tfrac {1}{3}\tfrac { \Psi _j(.,.,id. \phi _1)}{\Psi _j(.,., \phi _1)}\Big),& s=t=1. \end{array} \right. \end{eqnarray*}

In particular for \(j=6\) we have

\begin{align*} \mathfrak { B}^{6}_{s,t}(\Lambda _4)& =\left(\tfrac {(\ln t)^2}{(\ln s)^2}\cdot \tfrac {\sum _{j=1}^{m}s^{-a_j}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_is^{-x_{ij}}- s^{-J(\textbf{a},X,\textbf{w})} } {\sum _{j=1}^{m}t^{-a_j}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_it^{-x_{ij}}- t^{-J(\textbf{a},X,\textbf{w})} }\right)^\frac {1}{s-t},\ s\neq t,\\ \mathfrak { B}^{6}_{s,s}( \Lambda _4)& = \exp \left(-\tfrac {1}{s}\tfrac {\sum _{j=1}^{m}a_js^{-a_j}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix_{ij}s^{-x_{ij}}- J(\textbf{a},X,\textbf{w})s^{-J(\textbf{a},X,\textbf{w})} } {\sum _{j=1}^{m}s^{-a_j}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_is^{-x_{ij}}- s^{-J(\textbf{a},X,\textbf{w})} }\right.\\ & \quad \quad \quad \left.-\tfrac {2}{s\ln s}\right),\ s\neq 1,\\ \mathfrak { B}^{6}_{s,1}( \Lambda _4)& =\left(\tfrac {2(\sum _{j=1}^{m}s^{-a_j}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_is^{-x_{ij}}- s^{-J(\textbf{a},X,\textbf{w})} )} {(\ln s)^{2}[\sum _{j=1}^{m}a_j^{2}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix_{ij}^{2}- J^{2}(\textbf{a},X,\textbf{w}) ]}\right)^{\tfrac {1}{s-1}},\ s\neq 1,\\ \mathfrak { B}^{6}_{1,1}( \Lambda _4)& =\exp \left(-\tfrac {\sum _{j=1}^{m}a_j^{3}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix_{ij}^{3}- J^{3}(\textbf{a},X,\textbf{w}) } {3(\sum _{j=1}^{m}a_j^{2}-\sum _{j=1}^{m-1}\sum _{i=1}^nw_ix^{2}_{ij}- J^{2}(\textbf{a},X,\textbf{w})) }\right). \end{align*}

Monotonicity of \(\mathfrak { B}^{j}_{s,t}( \Lambda _4)\) follows from (32). By (30)

\[ \hat{\Gamma }^{j}_{s,t}= -L(s,t)\ln \mathfrak {B}^{j}_{s,t}(\Lambda _4), \; \; \; j=1,2,...,9, \]

defines a class of means, where \(L(s,t)\) is the logarithmic mean, defined by

\begin{equation*} L(s,t)\, =\, \left\{ \begin{array}{rcl} \tfrac {s-t}{\ln s-\ln t }, & \hbox{s $\neq $ t}, \\ s, & \hbox{s=t}. \end{array} \right. \end{equation*}

□

Bibliography

1

M. Anwar and J. Pečarić, Cauchy means of Mercer’s type, Utilitas Mathematica, 84, pp. 201–208, 2011.

2

P.S. Bullen, D.S. Mitrinović and P.M. Vasić, Means and Their Inequalities, Reidel, Dordrecht, 1988.

3

W.S. Cheung, A. Matković and J. Pečarić, A variant of Jessen’s inequality and generalized means, JIPAM, 7(1), Article 10, 2006.

4

S.S. Dragomir, A new refinement of Jensen’s inequality in linear spaces with applications, Mathematical and Computer Modelling, 52, pp. 1497-1505, 2010.

5

J. Jakšetić and J. Pečarić, Exponential Convexity Method, J. Convex Anal., to appear.

6

A.W. Marshall, I. Olkin and B.C. Arnold, Inequalities: Theory of majorization and its applications (Second Edition), Springer Series in Statistics, New York 2011.

7

A. Matković and J. Pečarić, Refinements of the Jensen-Mercer inequality for index set functions with applications, Revue d’analyse numérique et de théorie de l’approximation, 35, no. 1, pp. 71–82, 2006.

8

A.McD. Mercer, A variant of Jensen’s inequality, J. Ineq. Pure and Appl. Math., 4(4), 2003, Article 73.

9

M. Niezgoda, A generalization of Mercer’s result on convex functions, Nonlinear Anal., 71, pp.  2771–2779, 2009.

10

J. Pečarić and J. Perić, Improvement of the Giaccardi and the Petrović inequality and related Stolarsky type means, An. Univ. Craiova Ser. Mat. Inform., to appear.

11

J. Pečarić, F. Proschan and Y.L. Tong, Convex functions, Partial Orderings and Statistical Applications, Academic Press, New York, 1992.