<!DOCTYPE html>
<html lang="en">
<head>
<script>
  MathJax = { 
    tex: {
		    inlineMath: [['\\(','\\)']]
	} }
</script>
<script type="text/javascript" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js">
</script>
<meta name="generator" content="plasTeX" />
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Positive semi-definite matrices, exponential convexity for multiplicative majorization and related means of Cauchy’s type</title>
<link rel="stylesheet" href="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/styles/theme-white.css" />
</head>

<body>

<div class="wrapper">

<div class="content">
<div class="content-wrapper">


<div class="main-text">


<div class="titlepage">
<h1>Positive semi-definite matrices, exponential convexity for multiplicative majorization and related means of Cauchy’s type</h1>
<p class="authors">
<span class="author">Naveed Latif\(^\ast \), Josip Pečarić\(^\S \)</span>
</p>
<p class="date">March 1, 2010.</p>
</div>
<p>\(^\ast \) Abdus Salam School of Mathematical Sciences, GC University, Lahore, Pakistan, e-mail: <span class="tt">sincerehumtum@yahoo.com</span> </p>
<p>\(^\S \) University of Zagreb, Faculty Of Textile Technology Zagreb, Croatia, e-mail: <span class="tt">pecaric@mahazu.hazu.hr</span> </p>

<div class="abstract"><p> In this paper, we obtain new results concerning the generalizations of additive and multiplicative majorizations by means of exponential convexity. We prove positive semi-definiteness of matrices generated by differences deduced from majorization type results which implies exponential convexity and \(\log \)-convexity of these differences and also obtain Lyapunov’s and Dresher’s inequalities for these differences. We give some applications of additive and multiplicative majorizations. In addition, we introduce new means of Cauchy’s type and establish their monotonicity. </p>
<p><b class="bf">MSC.</b> 39B62, 26A51, 26B25 </p>
<p><b class="bf">Keywords.</b> Convex function, additive majorization, multiplicative majorization, applications of majorization, positive semi-definite matrix, exponential-convexity, \(\log \)-convexity, Lyapunov’s inequality, Dresher’s inequality, means of Cauchy’s type. </p>
</div>
<h1 id="a0000000002">1 Introduction and Preliminaries</h1>
<p> For fixed \( n\geq 2\) let </p>
<div class="displaymath" id="a0000000003">
  \begin{equation*}  \textbf{x} = (x_{1}, ..., x_{n}), \, \, \,  \textbf{y} = (y_{1}, ..., y_{n} ) \end{equation*}
</div>
<p> denote two n-tuples. Let </p>
<div class="displaymath" id="a0000000004">
  \begin{equation*}  x_{[1]}\, \geq \,  x_{[2]}\, \geq ...\geq \,  x_{[n]},\, \, \, \, \,  y_{[1]}\, \geq \, y_{[2]}\, \geq ...\geq \,  y_{[n]}, \end{equation*}
</div>
<div class="displaymath" id="a0000000005">
  \begin{equation*}  x_{(1)}\, \leq \,  x_{(2)}\, \leq ...\leq \,  x_{(n)},\, \, \, \, \,  y_{(1)}\, \leq \, y_{(2)}\, \leq ...\leq \,  y_{(n)} \end{equation*}
</div>
<p> be their ordered components. </p>
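<p>The two rearrangements can be produced directly by sorting; a minimal illustration in Python (the sample tuple is our own):</p>

```python
x = (2, 5, 1)

# decreasing rearrangement: x_[1] >= x_[2] >= ... >= x_[n]
print(sorted(x, reverse=True))   # [5, 2, 1]

# increasing rearrangement: x_(1) <= x_(2) <= ... <= x_(n)
print(sorted(x))                 # [1, 2, 5]
```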
<p><div class="definition_thmwrapper " id="def1.1">
  <div class="definition_thmheading">
    <span class="definition_thmcaption">
    Definition
    </span>
    <span class="definition_thmlabel">1.1</span>
  </div>
  <div class="definition_thmcontent">
  <p> <span class="rm">(cf. <span class="cite">
	[
	<a href="#jpb" >10</a>
	]
</span>, p. 319)</span> \(\textbf{y}\) is said to majorize \(\textbf{x}\) (or \(\textbf{x}\) is said to be majorized by \(\textbf{y}\)), in symbol, \( \textbf{y} \succ \textbf{x} \), if </p>
<div class="equation" id="d1">
<p>
  <div class="equation_content">
    \begin{equation} \label{d1} \sum _{i=1}^m x_{[i]}\,  \leq \,  \sum _{i=1}^m y_{[i]} \end{equation}
  </div>
  <span class="equation_label">1.1</span>
</p>
</div>
<p> holds for \( m =1, 2, ..., n-1\) and </p>
<div class="equation" id="d2">
<p>
  <div class="equation_content">
    \begin{equation} \label{d2} \sum _{i=1}^n x_{i}\,  =\,  \sum _{i=1}^n y_{i}. \end{equation}
  </div>
  <span class="equation_label">1.2</span>
</p>
</div>
<p> Note that <span class="rm">(<a href="#d1">1.1</a>)</span> is equivalent to </p>
<div class="displaymath" id="a0000000006">
  \begin{equation*}  \sum _{i=n-m+1}^n x_{(i)}\,  \leq \,  \sum _{i=n-m+1}^n y_{(i)} \end{equation*}
</div>
<p> for \( m =1, 2, ..., n-1\). </p>

  </div>
</div> </p>
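<p>Definition <a href="#def1.1">1.1</a> is easy to check computationally. The following sketch (the function name, tolerance, and sample tuples are our own choices) tests the partial-sum conditions (<a href="#d1">1.1</a>) and the equal-total condition (<a href="#d2">1.2</a>):</p>

```python
def majorizes(y, x, tol=1e-12):
    """Return True if y majorizes x (Definition 1.1): the partial sums of
    the decreasing rearrangements satisfy (1.1) and the totals agree (1.2)."""
    ys = sorted(y, reverse=True)
    xs = sorted(x, reverse=True)
    px = py = 0.0
    for m in range(len(xs) - 1):            # m = 1, ..., n-1 in (1.1)
        px += xs[m]
        py += ys[m]
        if px > py + tol:
            return False
    return abs(sum(xs) - sum(ys)) <= tol    # condition (1.2)

print(majorizes((3, 1), (2, 2)))   # True:  3 >= 2 and 3 + 1 = 2 + 2
print(majorizes((2, 2), (3, 1)))   # False: the partial sum 3 exceeds 2
```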
<p>Parallel to the concept of additive majorization is the notion of multiplicative majorization (also termed log-majorization). </p>
<p><div class="definition_thmwrapper " id="def1.21">
  <div class="definition_thmheading">
    <span class="definition_thmcaption">
    Definition
    </span>
    <span class="definition_thmlabel">1.2</span>
  </div>
  <div class="definition_thmcontent">
  <p> Let \(\textbf{x}\), \(\textbf{y}\) be two positive \(n\)-tuples. \(\textbf{x}\) is said to be multiplicatively majorized by \(\textbf{y}\), denoted by \(\textbf{x} \prec _{\times } \textbf{y}\), if </p>
<div class="equation" id="d1.21">
<p>
  <div class="equation_content">
    \begin{equation} \label{d1.21} \prod _{i=1}^m x_{[i]}\, \leq \,  \prod _{i=1}^m y_{[i]} \end{equation}
  </div>
  <span class="equation_label">1.3</span>
</p>
</div>
<p> holds for \( m =1, 2, ..., n-1\) and </p>
<div class="equation" id="d1.22">
<p>
  <div class="equation_content">
    \begin{equation} \label{d1.22} \prod _{i=1}^n x_{i}\, =\,  \prod _{i=1}^n y_{i}. \end{equation}
  </div>
  <span class="equation_label">1.4</span>
</p>
</div>
<p> Note that <span class="rm">(<a href="#d1.21">1.3</a>)</span> is equivalent to </p>
<div class="displaymath" id="a0000000007">
  \begin{equation*}  \prod _{i=n-m+1}^n x_{(i)}\,  \leq \,  \prod _{i=n-m+1}^n y_{(i)} \end{equation*}
</div>
<p> for \( m =1, 2, ..., n-1\). </p>

  </div>
</div> To differentiate the two types of majorization, we sometimes use the symbol \(\prec _{+}\) rather than \(\prec \) to denote (additive) majorization. </p>
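<p>For positive tuples, conditions (<a href="#d1.21">1.3</a>)–(<a href="#d1.22">1.4</a>) are exactly additive majorization of the logarithms, which suggests a direct computational check; a sketch (names and sample tuples are illustrative only):</p>

```python
import math

def mult_majorizes(x, y, tol=1e-12):
    """Return True if x is multiplicatively majorized by y (Definition 1.2),
    checked via partial sums of the decreasingly sorted logarithms."""
    lx = sorted((math.log(v) for v in x), reverse=True)
    ly = sorted((math.log(v) for v in y), reverse=True)
    px = py = 0.0
    for m in range(len(lx) - 1):            # partial products via log-sums, (1.3)
        px += lx[m]
        py += ly[m]
        if px > py + tol:
            return False
    return abs(sum(lx) - sum(ly)) <= tol    # equal total products, (1.4)

print(mult_majorizes((2, 2), (4, 1)))   # True:  2 <= 4 and 2*2 = 4*1
print(mult_majorizes((4, 1), (2, 2)))   # False
```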
<p>The following theorem is well-known as the majorization theorem and a convenient reference for its proof is in the book of Marshall and Olkin (1979) (<span class="cite">
	[
	<a href="#mo" >6</a>
	]
</span>, p.11) (see <span class="cite">
	[
	<a href="#jpb" >10</a>
	]
</span>, p.320): <div class="theorem_thmwrapper " id="th1.2">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">1.3</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let \(I\) be an interval in \(\mathbb {R}\) and \(\textbf{x}\), \(\textbf{y}\) be two \(n\)-tuples such that \(x_{i}\), \(y_{i}\) \(\in I\) \((i=1, ..., n)\). Then </p>
<div class="equation" id="t1.201">
<p>
  <div class="equation_content">
    \begin{equation} \label{t1.201} \sum _{i=1}^n \phi (x_{i})\,  \leq \,  \sum _{i=1}^n \phi (y_{i}) \end{equation}
  </div>
  <span class="equation_label">1.5</span>
</p>
</div>
<p> holds for every continuous convex function \( \phi :I\rightarrow \mathbb {R}\) iff \( \textbf{y} \succ \textbf{x}\) holds. </p>

  </div>
</div> <div class="remark_thmwrapper " id="rmk1.3">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">1.4</span>
  </div>
  <div class="remark_thmcontent">
  <p> <span class="cite">
	[
	<a href="#z" >5</a>
	]
</span> If \( \phi (x) \) is a strictly convex function, then equality in (<a href="#t1.201">1.5</a>) is valid iff \( x_{[i]}=y_{[i]},\, i=1,..., n \).<span class="qed">□</span></p>

  </div>
</div> The following theorem can be regarded as a generalization of the majorization theorem and was proved by Fuchs (1947) in <span class="cite">
	[
	<a href="#lf" >4</a>
	]
</span> (see <span class="cite">
	[
	<a href="#jpb" >10</a>
	]
</span>, p. 323): <div class="theorem_thmwrapper " id="th1.4">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">1.5</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let \(\textbf{x}\), \(\textbf{y}\) be two decreasing n-tuples and \(\textbf{p} = (p_1, ..., p_n)\) be a real n-tuple such that </p>
<div class="equation" id="t1.401">
<p>
  <div class="equation_content">
    \begin{equation} \label{t1.401} \sum _{i=1}^k p_{i}\, x_{i}\,  \leq \,  \sum _{i=1}^k p_{i}\, y_{i}\quad \text{for}\quad k=1, ..., n-1; \end{equation}
  </div>
  <span class="equation_label">1.6</span>
</p>
</div>
<p> and </p>
<div class="equation" id="t1.402">
<p>
  <div class="equation_content">
    \begin{equation} \label{t1.402} \sum _{i=1}^n p_{i}\, x_{i}\,  =\,  \sum _{i=1}^n p_{i}\, y_{i}. \end{equation}
  </div>
  <span class="equation_label">1.7</span>
</p>
</div>
<p> Then for every continuous convex function \( \phi :I\rightarrow \mathbb {R}\), we have </p>
<div class="equation" id="t1.403">
<p>
  <div class="equation_content">
    \begin{equation} \label{t1.403} \sum _{i=1}^n p_{i}\, \phi (x_{i})\,  \leq \,  \sum _{i=1}^n p_{i}\, \phi (y_{i}). \end{equation}
  </div>
  <span class="equation_label">1.8</span>
</p>
</div>

  </div>
</div> Let \(x(\tau ) \), \(y(\tau )\) be two real-valued functions defined on an interval \( [a, b] \) such that \( \int _a^s x(\tau ){\rm d}\tau \), \( \int _a^s y(\tau ){\rm d}\tau \) both exist for all \(s \in [a, b] \). <div class="definition_thmwrapper " id="def3.1">
  <div class="definition_thmheading">
    <span class="definition_thmcaption">
    Definition
    </span>
    <span class="definition_thmlabel">1.6</span>
  </div>
  <div class="definition_thmcontent">
  <p> <span class="rm">(cf. <span class="cite">
	[
	<a href="#jpb" >10</a>
	]
</span>, p. 324)</span> \(y(\tau )\) is said to majorize \(x(\tau )\), in symbol, \(y(\tau ) \succ x(\tau ) \), for \(\tau \) \( \in \) \( [a, b]\) if they are decreasing in \(\tau \) \(\in \) \([a, b]\) and </p>
<div class="equation" id="d3.1">
<p>
  <div class="equation_content">
    \begin{equation} \label{d3.1} \int _a^s x(\tau )\, {\rm d}\tau \leq \int _a^s y(\tau )\, {\rm d}\tau \quad \text{for}\quad s \in [a, b], \end{equation}
  </div>
  <span class="equation_label">1.9</span>
</p>
</div>
<p> and equality in <span class="rm">(<a href="#d3.1">1.9</a>)</span> holds for \(s=b\). </p>

  </div>
</div> The following theorem can be regarded as majorization theorem in integral case (see <span class="cite">
	[
	<a href="#jpb" >10</a>
	]
</span>, p. 325): <div class="theorem_thmwrapper " id="th3.2">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">1.7</span>
  </div>
  <div class="theorem_thmcontent">
  <p> \(y(\tau ) \succ x(\tau )\) for \(\tau \) \(\in \) \([a , b]\) iff they are decreasing in \([a, b]\) and </p>
<div class="equation" id="t3.2">
<p>
  <div class="equation_content">
    \begin{equation} \label{t3.2} \int _a^b \phi (x(\tau ))\, {\rm d}\tau \, \leq \,  \int _a^b \phi (y(\tau ))\, {\rm d}\tau \end{equation}
  </div>
  <span class="equation_label">1.10</span>
</p>
</div>
<p> holds for every \(\phi \) that is continuous and convex in \([a, b] \) such that the integrals exist. </p>

  </div>
</div> The following theorem is a simple consequence of Theorem 12.14 in <span class="cite">
	[
	<a href="#jpp" >11</a>
	]
</span> (see <span class="cite">
	[
	<a href="#jpb" >10</a>
	]
</span>, p. 328): <div class="theorem_thmwrapper " id="th3.3">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">1.8</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let \(x(\tau ),\,  y(\tau ) : [a, b]\rightarrow \mathbb {R}\) be continuous and increasing functions, and let \(G:[a, b]\rightarrow \mathbb {R}\) be a function of bounded variation. </p>
<ul class="itemize">
  <li><p>If </p>
<div class="equation" id="t3.301">
<p>
  <div class="equation_content">
    \begin{equation} \label{t3.301} \int _\nu ^b x(\tau )\,  {\rm d}G(\tau )\, \leq \,  \int _\nu ^b y(\tau ) \, {\rm d}G(\tau )\quad \text{for all}\ \nu \in [a, b], \end{equation}
  </div>
  <span class="equation_label">1.11</span>
</p>
</div>
<p> and </p>
<div class="equation" id="t3.302">
<p>
  <div class="equation_content">
    \begin{equation} \label{t3.302} \int _a^b x(\tau )\,  {\rm d}G(\tau )\, =\,  \int _a^b y(\tau )\,  {\rm d}G(\tau ) \end{equation}
  </div>
  <span class="equation_label">1.12</span>
</p>
</div>
<p> hold then for every continuous convex function \(f\), we have </p>
<div class="equation" id="t3.303">
<p>
  <div class="equation_content">
    \begin{equation} \label{t3.303} \int _a^b f(x(\tau ))\,  {\rm d}G(\tau )\, \leq \,  \int _a^b f(y(\tau ))\,  {\rm d}G(\tau ). \end{equation}
  </div>
  <span class="equation_label">1.13</span>
</p>
</div>
</li>
  <li><p>If <span class="rm">(<a href="#t3.301">1.11</a>)</span> holds then <span class="rm">(<a href="#t3.303">1.13</a>)</span> holds for every continuous increasing convex function \(f\). </p>
</li>
</ul>

  </div>
</div> Let \(F(\tau )\), \(G(\tau )\) be two continuous and increasing functions for \(\tau \geq 0\) such that \(F(0) = G(0) = 0\) and define </p>
<div class="equation" id="pb101">
<p>
  <div class="equation_content">
    \begin{equation} \label{pb101} \overline{F}(\tau ) \, =\,  1 - F(\tau ),\qquad \overline{G}(\tau ) \, =\,  1 - G(\tau )\quad \text{for}\quad \tau \, \geq \, 0. \end{equation}
  </div>
  <span class="equation_label">1.14</span>
</p>
</div>
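<p>With \(\overline{F}\) and \(\overline{G}\) as in (<a href="#pb101">1.14</a>), the integral conditions of the next definition can be checked numerically for a concrete pair. A sketch with a hypothetical choice: \(F\) the uniform distribution function on \([0, 2]\) and \(G\) the unit-mean exponential distribution function, so both survival functions integrate to \(1\):</p>

```python
import math

# Illustrative numerical check of the survival-function majorization of
# Definition 1.9, for a hypothetical pair of distribution functions with
# F(0) = G(0) = 0 and equal (unit) survival integrals.
F_bar = lambda t: max(0.0, 1.0 - t / 2.0)   # 1 - F, uniform on [0, 2]
G_bar = lambda t: math.exp(-t)              # 1 - G, exponential with mean 1

h, N = 1e-3, 20000                          # trapezoidal grid on [0, 20]
cumF = cumG = 0.0
ok = True
for k in range(N):
    t = k * h
    cumF += h * (F_bar(t) + F_bar(t + h)) / 2
    cumG += h * (G_bar(t) + G_bar(t + h)) / 2
    ok = ok and (cumG <= cumF + 1e-6)       # partial-integral condition (1.15)

print(ok)                                   # (1.15) holds on the grid
print(abs(cumF - cumG) < 1e-3)              # total integrals agree, cf. (1.16)
```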
<p> <div class="definition_thmwrapper " id="pb">
  <div class="definition_thmheading">
    <span class="definition_thmcaption">
    Definition
    </span>
    <span class="definition_thmlabel">1.9</span>
  </div>
  <div class="definition_thmcontent">
  <p> <span class="rm">(cf. <span class="cite">
	[
	<a href="#jpb" >10</a>
	]
</span>, p. 330)</span> \(\overline{F}(\tau )\) is said to majorize \(\overline{G}(\tau )\), in symbol, \(\overline{F}(\tau ) \succ \overline{G}(\tau )\), for \(\tau \in [0, +\infty )\) if </p>
<div class="equation" id="pb01">
<p>
  <div class="equation_content">
    \begin{equation} \label{pb01} \int _0^s \overline{G}(\tau ) \, {\rm d}\tau \, \leq \, \int _0^s \overline{F}(\tau ) \, {\rm d}\tau \quad \text{for all}\ s\, >\, 0, \end{equation}
  </div>
  <span class="equation_label">1.15</span>
</p>
</div>
<p> and </p>
<div class="equation" id="pb02">
<p>
  <div class="equation_content">
    \begin{equation} \label{pb02} \int _0^\infty \overline{G}(\tau ) \, {\rm d}\tau \, =\, \int _0^\infty \overline{F}(\tau ) \, {\rm d}\tau \, <\, \infty . \end{equation}
  </div>
  <span class="equation_label">1.16</span>
</p>
</div>

  </div>
</div> The following result was obtained by Boland and Proschan (1986) <span class="cite">
	[
	<a href="#bp" >3</a>
	]
</span> (see <span class="cite">
	[
	<a href="#jpb" >10</a>
	]
</span>, p. 331): <div class="theorem_thmwrapper " id="pb1">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">1.10</span>
  </div>
  <div class="theorem_thmcontent">
  <p> \(\overline{F}(\tau ) \succ \overline{G}(\tau )\) for \(\tau \in [0, +\infty )\) holds iff </p>
<div class="equation" id="pb03">
<p>
  <div class="equation_content">
    \begin{equation} \label{pb03} \int _0^\infty \phi (\tau ) \, {\rm d}F(\tau ) \, \leq \, \int _0^\infty \phi (\tau ) \, {\rm d}G(\tau ) \end{equation}
  </div>
  <span class="equation_label">1.17</span>
</p>
</div>
<p> holds for all convex functions \(\phi \), provided the integrals are finite. </p>

  </div>
</div> <div class="definition_thmwrapper " id="a0000000008">
  <div class="definition_thmheading">
    <span class="definition_thmcaption">
    Definition
    </span>
    <span class="definition_thmlabel">1.11</span>
  </div>
  <div class="definition_thmcontent">
  <p>A function \(h:(a,b)\rightarrow \mathbb {R}\) is said to be exponentially convex if it is continuous and </p>
<div class="displaymath" id="a0000000009">
  \begin{eqnarray*}  \sum _{i,j=1}^{n}\xi _{i}\xi _{j}\, h\left(x_{i}\, +\, x_{j}\right)\, \geq \, 0 \end{eqnarray*}
</div>
<p> for all \(n\in \mathbb {N}\) and all choices \(\xi _{i}\in \mathbb {R}\) and \(x_{i}\in (a,b)\), \(i=1,...,n\) such that \(x_{i}+x_{j}\in (a,b)\), \(1\leq i,j\leq n\). </p>

  </div>
</div> The following proposition is given in <span class="cite">
	[
	<a href="#mnpj" >2</a>
	]
</span>: <div class="proposition_thmwrapper " id="prop">
  <div class="proposition_thmheading">
    <span class="proposition_thmcaption">
    Proposition
    </span>
    <span class="proposition_thmlabel">1.12</span>
  </div>
  <div class="proposition_thmcontent">
  <p> Let \(h:(a,b)\rightarrow \mathbb {R}\). The following statements are equivalent. </p>
<ol class="enumerate">
  <li><p>\(h\) is exponentially convex. </p>
</li>
  <li><p>\(h\) is continuous and </p>
<div class="displaymath" id="a0000000010">
  \begin{eqnarray*}  \sum _{i,j=1}^{n}\xi _{i}\xi _{j}\, h\left(\tfrac {x_{i}\, +\, x_{j}}{2}\right)\, \geq \, 0, \end{eqnarray*}
</div>
<p> for every \(n\in \mathbb {N}\), every \(\xi _{i}\in \mathbb {R}\) and every \(x_{i},x_{j}\in (a,b)\), \(1\leq i,j\leq n\). </p>
</li>
</ol>

  </div>
</div> <div class="corollary_thmwrapper " id="corrnew">
  <div class="corollary_thmheading">
    <span class="corollary_thmcaption">
    Corollary
    </span>
    <span class="corollary_thmlabel">1.13</span>
  </div>
  <div class="corollary_thmcontent">
  <p> If \(h\) is exponentially convex then </p>
<div class="displaymath" id="a0000000011">
  \begin{eqnarray*}  \mathrm{det}\left[h\left(\tfrac {x_{i}\, +\, x_{j}}{2}\right)\right]_{i,j=1}^{n}\, \geq \, 0, \end{eqnarray*}
</div>
<p> for every \(n\in \mathbb {N}\) and every \(x_{i}\in (a,b)\), \(i=1,...,n\). </p>

  </div>
</div> <div class="corollary_thmwrapper " id="corr">
  <div class="corollary_thmheading">
    <span class="corollary_thmcaption">
    Corollary
    </span>
    <span class="corollary_thmlabel">1.14</span>
  </div>
  <div class="corollary_thmcontent">
  <p> If \(h:(a,b)\rightarrow \mathbb {R^{+}}\) is an exponentially convex function, then \(h\) is a \(\log \)-convex function. </p>

  </div>
</div> The following lemma is equivalent to the definition of a convex function (see <span class="cite">
	[
	<a href="#jpb" >10</a>
	]
</span>, p. 2): <div class="lemma_thmwrapper " id="lem2.3">
  <div class="lemma_thmheading">
    <span class="lemma_thmcaption">
    Lemma
    </span>
    <span class="lemma_thmlabel">1.15</span>
  </div>
  <div class="lemma_thmcontent">
  <p> If \(f\) is convex on an interval \(I\subseteq \mathbb {R}\), then </p>
<div class="displaymath" id="a0000000012">
  \begin{equation*}  f(s_1)(s_3-s_2)+f(s_2)(s_1-s_3)+f(s_3)(s_2-s_1)\geq 0, \end{equation*}
</div>
<p> holds for every \(s_1{\lt}s_2{\lt}s_3\), \(s_1,s_2,s_3\in I\). </p>

  </div>
</div> In <span class="cite">
	[
	<a href="#mnop" >1</a>
	]
</span>, the following result is proved: <div class="theorem_thmwrapper " id="th1.5">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">1.16</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let <span class="rm"><b class="bfseries">x</b></span> and <span class="rm"><b class="bfseries">y</b></span> be two positive \(n\)-tuples, <span class="rm"><b class="bfseries">y</b></span> \(\succ \) <span class="rm"><b class="bfseries">x</b></span>, </p>
<div class="displaymath" id="a0000000013">
  \begin{eqnarray*}  \Lambda _t\, =\, \Lambda _t(\textbf{x};\, \textbf{y})\, :=\, \sum _{i=1}^n\varphi _t\left(y_i\right)\, -\,  \sum _{i=1}^n\varphi _t\left(x_i\right), \end{eqnarray*}
</div>
<p> and the \(x_{[i]}\)’s and \(y_{[i]}\)’s are not all equal. </p>
<p>Then the following statements are valid: </p>
<ol class="enumerate">
  <li><p>For every \(n\in \mathbb {N}\) and \(s_{1},...,s_{n}\in \mathbb {R}\), the matrix \(\left[\Lambda _{\tfrac {s_{i}+s_{j}}{2}}\right]_{i,j=1}^{n}\) is positive semi-definite. In particular, </p>
<div class="equation" id="nasa1">
<p>
  <div class="equation_content">
    \begin{equation} \label{nasa1} \mathrm{det}\left[\Lambda _{\tfrac {s_{i}\, +\, s_{j}}{2}}\right]_{i,j=1}^{k}\, \geq \, 0 \end{equation}
  </div>
  <span class="equation_label">1.18</span>
</p>
</div>
<p> for \(k=1,...,n.\) </p>
</li>
  <li><p>The function \(s\rightarrow \Lambda _{s}\) is exponentially convex. </p>
</li>
  <li><p>The function \(s\rightarrow \Lambda _{s}\) is \(\log \)-convex on \(\mathbb {R}\) and the following inequality holds for \(-\infty \, {\lt}\, r\, {\lt}\, s\, {\lt}\, t\, {\lt}\, \infty :\) </p>
<div class="equation" id="1.501">
<p>
  <div class="equation_content">
    \begin{equation} \label{1.501} \Lambda _s^{t-r}\,  \le \,  \Lambda _r^{t-s}\, \Lambda _t^{s-r}. \end{equation}
  </div>
  <span class="equation_label">1.19</span>
</p>
</div>
</li>
</ol>

  </div>
</div> Similar results and corresponding Cauchy means are proved in <span class="cite">
	[
	<a href="#mnop" >1</a>
	]
</span> under the stronger condition that \(\textbf{x}\) and \(\textbf{y}\) are positive \(n\)-tuples. </p>
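<p>Theorem <a href="#th1.5">1.16</a> lends itself to a quick numerical sanity check. The sketch below assumes for \(\varphi _t\) the power family used in [<a href="#mnop">1</a>] (the analogue of \(\overline{\varphi }_s\) in Section 2); the tuples and exponents are our own choices:</p>

```python
def phi(t, x):
    # assumed power family: phi_t(x) = x**t / (t*(t-1)), t not in {0, 1}
    return x ** t / (t * (t - 1))

def Lam(t, xs, ys):
    # Lambda_t(x; y) = sum phi_t(y_i) - sum phi_t(x_i), as in Theorem 1.16
    return sum(phi(t, v) for v in ys) - sum(phi(t, v) for v in xs)

xs, ys = (2.0, 2.0), (3.0, 1.0)   # y majorizes x, tuples not identical

# leading 2x2 minor of the matrix [Lambda_{(s_i+s_j)/2}] from (1.18), s = (2, 4)
s1, s2 = 2.0, 4.0
det2 = Lam(s1, xs, ys) * Lam(s2, xs, ys) - Lam((s1 + s2) / 2, xs, ys) ** 2
print(det2 >= 0)                  # positive semi-definiteness

# log-convexity inequality (1.19) with r < s < t
r, s, t = 2.0, 3.0, 4.0
print(Lam(s, xs, ys) ** (t - r) <= Lam(r, xs, ys) ** (t - s) * Lam(t, xs, ys) ** (s - r))
```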
<p>In this paper we give results for generalizations of additive majorization as in <span class="cite">
	[
	<a href="#mnop" >1</a>
	]
</span> and multiplicative majorization. Moreover, several applications of majorization are obtained by using the following important example: </p>
<div class="displaymath" id="a0000000014">
  \begin{equation*}  \big(\sum _{i=1}^n x_i, 0, ..., 0\big)\succ (x_{1}, ..., x_{n}). \end{equation*}
</div>
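<p>This relation is simple to verify directly from Definition <a href="#def1.1">1.1</a>, and via Theorem <a href="#th1.2">1.3</a> it already produces concrete inequalities; a sketch with an arbitrary non-negative tuple of our own choosing:</p>

```python
# (sum(x), 0, ..., 0) majorizes (x_1, ..., x_n) for non-negative x:
# its partial sums dominate and the totals coincide (Definition 1.1).
x = (4.0, 3.0, 2.0, 1.0)
y = (sum(x),) + (0.0,) * (len(x) - 1)

xs = sorted(x, reverse=True)
ys = sorted(y, reverse=True)
partial_ok = all(sum(xs[:m]) <= sum(ys[:m]) for m in range(1, len(x)))
print(partial_ok and sum(xs) == sum(ys))       # True

# Theorem 1.3 with the convex phi(u) = u**2 then gives sum x_i^2 <= (sum x_i)^2:
print(sum(v ** 2 for v in x) <= sum(x) ** 2)   # True
```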
<p> We also give some applications of additive and multiplicative majorizations.<br />In this connection, the following remark in <span class="cite">
	[
	<a href="#py" >7</a>
	]
</span> is important:<br />“majorization theory is the underlying mathematical theory on which the framework hinges. It allows the transformation of the originally complicated matrix-valued non-convex problem into a simple scalar problem." </p>
<p>It was shown in <span class="cite">
	[
	<a href="#py" >7</a>
	]
</span> that additive majorization relation plays a key role in the design of linear MIMO transceivers, whereas the multiplicative majorization relation is the basis for nonlinear decision-feedback MIMO transceivers. </p>
<h1 id="a0000000015">2 The case of non-negative sequences and functions</h1>
<p><div class="lemma_thmwrapper " id="lem2.1">
  <div class="lemma_thmheading">
    <span class="lemma_thmcaption">
    Lemma
    </span>
    <span class="lemma_thmlabel">2.1</span>
  </div>
  <div class="lemma_thmcontent">
  <p> Define the function </p>
<div class="equation" id="jn">
<p>
  <div class="equation_content">
    \begin{equation} \label{jn} \overline{\varphi }_{s}(x)\, :=\, \left\{  \begin{array}{ll} \tfrac {x^{s}}{s(s-1)}, &  s \neq 1; \\ x\log x, &  s = 1, \end{array} \right. \end{equation}
  </div>
  <span class="equation_label">2.20</span>
</p>
</div>
<p> where \( s \in \mathbb {R^{+}}\).<br />Then \(\overline{\varphi }''_{s}(x)=x^{s-2}\), that is, \(\overline{\varphi }_{s}(x)\) is convex for \(x{\gt}0.\) </p>

  </div>
</div> In our results we use the notation \(0\, {\rm log}\, 0=0\). <div class="theorem_thmwrapper " id="th2.4">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">2.2</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let <span class="rm"><b class="bfseries">x</b></span> and <span class="rm"><b class="bfseries">y</b></span> be two non-negative \(n\)-tuples, \(\textbf{y} \succ \textbf{x}\), </p>
<div class="displaymath" id="a0000000016">
  \[ \overline{\Lambda }_t \, =\,  \overline{\Lambda }_t(\textbf{x};\, \textbf{y})\,  :=\,  \sum _{i=1}^n\, \overline{\varphi }_t(y_i) - \sum _{i=1}^n\,  \overline{\varphi }_t(x_i), \]
</div>
<p> and the \(x_{[i]}\)’s and \(y_{[i]}\)’s are not all equal.<br />Then the following statements are valid: </p>
<ol class="enumerate">
  <li><p>For every \(n\in \mathbb {N}\) and \(s_{1},...,s_{n}\in \mathbb {R^{+}}\), the matrix \(\left[\overline{\Lambda }_{\tfrac {s_{i}+s_{j}}{2}}\right]_{i,j=1}^{n}\) is positive semi-definite. In particular, </p>
<div class="equation" id="nasa1">
<p>
  <div class="equation_content">
    \begin{equation} \label{nasa1} \mathrm{det}\left[\overline{\Lambda }_{\tfrac {s_{i}\, +\, s_{j}}{2}}\right]_{i,j=1}^{k}\, \geq \, 0 \end{equation}
  </div>
  <span class="equation_label">2.21</span>
</p>
</div>
<p> for \(k=1,...,n.\) </p>
</li>
  <li><p>The function \(s\rightarrow \overline{\Lambda }_{s}\) is exponentially convex. </p>
</li>
  <li><p>The function \(s\rightarrow \overline{\Lambda }_{s}\) is \(\log \)-convex on \(\mathbb {R^{+}}\) and the following inequality holds for \(0\, {\lt}\, r\, {\lt}\, s\, {\lt}\, t\, {\lt}\, \infty :\) </p>
<div class="equation" id="mohib">
<p>
  <div class="equation_content">
    \begin{equation} \label{mohib} \left(\overline{\Lambda _s}\right)^{t-r}\,  \le \,  \left(\overline{\Lambda _r}\right)^{t-s}\, \left(\overline{\Lambda _t}\right)^{s-r}. \end{equation}
  </div>
  <span class="equation_label">2.22</span>
</p>
</div>
</li>
</ol>
<p> <div class="proof_wrapper" id="a0000000017">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> (a) Consider the function </p>
<div class="displaymath" id="a0000000018">
  \begin{equation*}  \mu (x)\, =\, \sum _{i,j}^{k}\, u_{i}u_{j}\, \overline{\varphi }_{s_{ij}}(x) \end{equation*}
</div>
<p> for \(k=1,...,n\), \(x{\gt}0\), \(u_{i}\in \mathbb {R}\), \(s_{ij}\in \mathbb {R^{+}}\), where \(s_{ij}=\tfrac {s_{i}\, +\, s_{j}}{2}\) and \(\overline{\varphi }_{s_{ij}}\) is defined in (<a href="#jn">2.20</a>). </p>
<p>We have </p>
<div class="displaymath" id="a0000000019">
  \begin{align*}  \mu ''(x)& =\, \sum _{i,j}^{k}\, u_{i}u_{j}\, x^{{s_{ij}\, -\, 2}}=\\ & =\, \left(\sum _{i}^{k}\, u_{i}\, x^{\tfrac {s_{i}}{2}\, -\, 1}\right)^{2}\, \geq \, 0,\, \, \, x\, {\gt}\, 0. \end{align*}
</div>
<p> This shows that \(\mu \) is a convex function for \(x{\gt}0\). </p>
<p>Using Theorem <a href="#th1.2">1.3</a>, </p>
<div class="displaymath" id="a0000000020">
  \begin{eqnarray*}  \sum _{m=1}^n\mu \left(y_m\right)\, -\, \sum _{m=1}^n{\mu \left(x_m\right)}\, \ge \, 0. \end{eqnarray*}
</div>
<p> This implies </p>
<div class="displaymath" id="a0000000021">
  \begin{eqnarray*}  \sum _{m=1}^{n}\left(\sum _{i,j}^{k}\, u_{i}u_{j}\, \overline{\varphi }_{s_{ij}}\left(y_m\right)\right)\, -\,  \sum _{m=1}^{n}\left(\sum _{i,j}^{k}\, u_{i}u_{j}\, \overline{\varphi }_{s_{ij}}\left(x_m\right)\right)\, \ge \, 0, \end{eqnarray*}
</div>
<p> or equivalently </p>
<div class="displaymath" id="a0000000022">
  \begin{eqnarray*}  \sum _{i,j}^{k}\, u_{i}u_{j}\, \overline{\Lambda }_{s_{ij}}\, \geq \, 0. \end{eqnarray*}
</div>
<p> From the last inequality, it follows that the matrix \(\left[\overline{\Lambda }_{\tfrac {s_{i}+s_{j}}{2}}\right]_{i,j=1}^{n}\) is positive semi-definite, that is, (<a href="#nasa1">2.21</a>) is valid.<br />(b) Note that \(\overline{\Lambda }_s\) is continuous for \(s \in \mathbb {R^{+}}\). Then by using Proposition <a href="#prop">1.12</a>, we get the exponential convexity of the function \(s\rightarrow \overline{\Lambda }_{s}\).<br />(c) Since \(\overline{\varphi }_{t}(x)\) is a continuous and strictly convex function for \(x{\gt}0\) and the \(x_{[i]}\)’s and \(y_{[i]}\)’s are not all equal, by Theorem <a href="#th1.2">1.3</a> with \(\phi =\overline{\varphi }_{t}\) we have </p>
<div class="displaymath" id="a0000000023">
  \begin{equation*}  \sum _{i=1}^n \overline{\varphi }_{t}\left(y_{i}\right)\, {\gt}\, \sum _{i=1}^n \overline{\varphi }_{t}\left(x_{i}\right). \end{equation*}
</div>
<p> This implies </p>
<div class="displaymath" id="a0000000024">
  \begin{equation*}  \overline{\Lambda }_t\, =\,  \overline{\Lambda }_t(\textbf{x};\, \textbf{y})\,  =\,  \sum _{i=1}^n\overline{\varphi }_t\left(y_i\right)\, -\, \sum _{i=1}^n\overline{\varphi }_t\left(x_i\right)\, {\gt}\, 0, \end{equation*}
</div>
<p> that is, \(\overline{\Lambda }_t\) is a positive-valued function. </p>
<p>A simple consequence of Corollary <a href="#corr">1.14</a> is that \(\overline{\Lambda }_s\) is \(\log \)-convex; hence, applying Lemma <a href="#lem2.3">1.15</a> to \(s\mapsto \log \overline{\Lambda }_{s}\) with \(s_1=r\), \(s_2=s\), \(s_3=t\) gives </p>
<div class="displaymath" id="a0000000025">
  \begin{equation*}  \log \left(\overline{\Lambda }_{s}\right)^{t-r}\, \leq \,  \log \left(\overline{\Lambda }_{r}\right)^{t-s}\, +\, \log \left(\overline{\Lambda }_{t}\right)^{s-r}, \end{equation*}
</div>
<p> which is equivalent to (<a href="#mohib">2.22</a>). <div class="proof_wrapper" id="a0000000026">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> </p>

  </div>
</div> As in <span class="cite">
	[
	<a href="#mnop" >1</a>
	]
</span>, we define the following means of Cauchy type. </p>
<div class="displaymath" id="eq01">
  \begin{align} \label{eq01} M_{t, s}& =\,  \Big(\tfrac {\overline{\Lambda }_t}{\overline{\Lambda }_s}\Big)^{\tfrac {1}{t-s}},\, \, \, \, \, \, \, \, \, \, \, t,\, s\in \mathbb {R^{+}},\, \, \, s\neq t.\\ M_{s, s}& =\,  \exp \Big( \tfrac {\sum _{i=1}^n {y_i}^{s}\,  \log y_i - \sum _{i=1}^n {x_i}^{s}\,  \log x_i}{\sum _{i=1}^n {y_i}^{s}-\sum _{i=1}^n {x_i}^{s}} - \tfrac {2s-1}{s(s-1)}\Big) , \, \, \,  s\neq 1.\nonumber \\ M_{1, 1}& =\,  \exp \Big( \tfrac {\sum _{i=1}^n {y_i}\,  (\log y_i)^{2} - \sum _{i=1}^n {x_i}\,  (\log x_i)^{2}}{2\big(\sum _{i=1}^n {y_i}\, \log y_{i}-\sum _{i=1}^n {x_i}\, \log x_{i}\big)} - 1 \Big).\nonumber \end{align}
</div>
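<p>The monotonicity stated in the next theorem can be observed numerically. A sketch using \(\overline{\varphi }_s\) from (<a href="#jn">2.20</a>), with sample tuples of our own choosing (\(\textbf{y}\succ \textbf{x}\), the tuples not identical):</p>

```python
import math

def phibar(s, x):
    # phi-bar_s from (2.20)
    return x * math.log(x) if s == 1 else x ** s / (s * (s - 1))

def Lam(s, xs, ys):
    # Lambda-bar_s(x; y) from Theorem 2.2
    return sum(phibar(s, v) for v in ys) - sum(phibar(s, v) for v in xs)

def M(t, s, xs, ys):
    # Cauchy-type mean M_{t,s} defined above, t != s
    return (Lam(t, xs, ys) / Lam(s, xs, ys)) ** (1.0 / (t - s))

xs, ys = (2.0, 2.0), (3.0, 1.0)            # y majorizes x
print(M(2, 3, xs, ys))                     # -> 2.0
print(M(2, 3, xs, ys) <= M(3, 4, xs, ys))  # Theorem 2.3 with t <= u, s <= v
```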
<p> <div class="theorem_thmwrapper " id="thm2.5">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">2.3</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let \(t, s, u, v \in \mathbb {R^{+}}\) be such that \(t\leq u\) and \(s\leq v\). Then the following inequality is valid: </p>
<div class="equation" id="t2.5">
<p>
  <div class="equation_content">
    \begin{equation} \label{t2.5} M_{t,s}\,  \leq \,  M_{u, v}. \end{equation}
  </div>
  <span class="equation_label">2.24</span>
</p>
</div>

  </div>
</div> <div class="proof_wrapper" id="a0000000027">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> Since \(\overline{\Lambda }_t\) is \(\log \)-convex, (<a href="#t2.5">2.24</a>) follows from (<a href="#eq01">2.23</a>). <div class="proof_wrapper" id="a0000000028">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> <div class="theorem_thmwrapper " id="th2.6">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">2.4</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let <span class="rm"><b class="bfseries">x</b></span> and <span class="rm"><b class="bfseries">y</b></span> be two non-negative decreasing \(n\)-tuples, \(\textbf{p}=(p_1,...,p_n)\) be a real n-tuple and let </p>
<div class="displaymath" id="a0000000029">
  \[ \overline{\lambda }_t \, =\,  \overline{\lambda }_t\textmd{\rm (\textbf{x},\, \textbf{y};\, \textbf{p})}\,  :=\,  \sum _{i=1}^np_i\, \overline{\varphi }_t(y_i) - \sum _{i=1}^np_i\,  \overline{\varphi }_t(x_i), \]
</div>
<p> such that conditions <span class="rm">(<a href="#t1.401">1.6</a>)</span> and <span class="rm">(<a href="#t1.402">1.7</a>)</span> are satisfied and \(\overline{\lambda }_t\) is positive. Then the following statements are valid: </p>
<ol class="enumerate">
  <li><p>For every \(n\in \mathbb {N}\) and \(s_{1},...,s_{n}\in \mathbb {R^{+}}\), the matrix \(\left[\overline{\lambda }_{\tfrac {s_{i}+s_{j}}{2}}\right]_{i,j=1}^{n}\) is positive semi-definite. In particular, </p>
<div class="equation" id="a0000000030">
<p>
  <div class="equation_content">
    \begin{equation}  \mathrm{det}\left[\overline{\lambda }_{\tfrac {s_{i}\, +\, s_{j}}{2}}\right]_{i,j=1}^{k}\, \geq \, 0 \end{equation}
  </div>
  <span class="equation_label">2.25</span>
</p>
</div>
<p> for \(k=1,...,n.\) </p>
</li>
  <li><p>The function \(s\rightarrow \overline{\lambda }_{s}\) is exponentially convex. </p>
</li>
  <li><p>The function \(s\rightarrow \overline{\lambda }_{s}\) is \(\log \)-convex on \(\mathbb {R^{+}}\) and the following inequality holds for \(0\, {\lt}\, r\, {\lt}\, s\, {\lt}\, t\, {\lt}\, \infty :\) </p>
<div class="equation" id="a0000000031">
<p>
  <div class="equation_content">
    \begin{equation}  \left(\overline{\lambda }_s\right)^{t-r}\,  \le \,  \left(\overline{\lambda }_r\right)^{t-s}\, \left(\overline{\lambda }_t\right)^{s-r}. \end{equation}
  </div>
  <span class="equation_label">2.26</span>
</p>
</div>
</li>
</ol>
<p><div class="proof_wrapper" id="a0000000032">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  <p> As in the proof of Theorem <a href="#th2.4">2.2</a>, we use Theorem <a href="#th1.4">1.5</a> instead of Theorem <a href="#th1.2">1.3</a>. </p>
  </div>
</div> </p>

  </div>
</div> As in <span class="cite">
	[
	<a href="#mnop" >1</a>
	]
</span>, we define the following means of Cauchy type. </p>
<div class="displaymath" id="eq02">
  \begin{align} \label{eq02} \widetilde{M}_{t, s} \, & =\, \Big(\tfrac {\overline{\lambda }_t}{\overline{\lambda }_s}\Big)^{\tfrac {1}{t-s}},\, \, \, \, \, \, \, \, \, t,\, s\in \mathbb {R^{+}},\, \, \, s\neq t.\\ \widetilde{M}_{s, s}\, & =\,  \exp \Big( \tfrac {\sum _{i=1}^n p_i\,  {y_i}^{s} \, \log y_i - \sum _{i=1}^n p_i\,  {x_i}^{s}\,  \log x_i}{\sum _{i=1}^n p_i \, {y_i}^{s}-\sum _{i=1}^n p_i\,  {x_i}^{s}} - \tfrac {2s-1}{s(s-1)}\Big) , \, \, \,  s\neq 1.\nonumber \\ \widetilde{M}_{1, 1}\, &  =\,  \exp \Big(\tfrac {\sum _{i=1}^n p_i\,  y_i\,  (\log y_i)^{2}- \sum _{i=1}^n p_i\,  x_i\,  (\log x_i)^{2}}{2 \big( \sum _{i=1}^n p_i\,  y_i\,  \log y_i - \sum _{i=1}^n p_i\,  x_i\,  \log x_i \big)}- 1\Big).\nonumber \end{align}
</div>
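The statements of Theorem 2.4 can be spot-checked numerically. The sketch below is only an illustration under assumptions not fixed by the statement: it takes \(\overline{\varphi }_t(x)=x^{t}/(t(t-1))\), weights \(p_{i}=1\), and the decreasing majorized pair \(\textbf{x}=(2,2)\), \(\textbf{y}=(3,1)\), for which \(\overline{\lambda }_t\) is positive.

```python
# Assumed convex power function: phi_t(x) = x^t / (t(t-1)), t != 0, 1.
def phi(t, x):
    return x ** t / (t * (t - 1))

# Hypothetical data satisfying the hypotheses: p_i = 1 and the decreasing
# pair y = (3, 1) majorizing x = (2, 2), so lambda_t > 0 for t > 0.
p, xs, ys = [1.0, 1.0], [2.0, 2.0], [3.0, 1.0]

def lam(t):
    return sum(pi * phi(t, y) for pi, y in zip(p, ys)) \
         - sum(pi * phi(t, x) for pi, x in zip(p, xs))

# Statement 1: the matrix [lambda_{(s_i+s_j)/2}] should have non-negative
# leading principal minors (positive semi-definiteness, (2.25)).
s = [0.5, 2.0, 3.5]
A = [[lam((si + sj) / 2) for sj in s] for si in s]
minor1 = A[0][0]
minor2 = A[0][0] * A[1][1] - A[0][1] * A[1][0]
minor3 = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
print(minor1, minor2, minor3)  # each should be >= 0 (up to rounding)

# Statement 3 (Lyapunov-type inequality (2.26)) with 0 < r < s < t:
r_, s_, t_ = 0.5, 2.0, 3.0
lyap = lam(r_) ** (t_ - s_) * lam(t_) ** (s_ - r_) - lam(s_) ** (t_ - r_)
print(lyap)  # should be >= 0
```

The midpoints \((s_i+s_j)/2\) were chosen to avoid the singular values \(t=0,1\) of \(\overline{\varphi }_t\).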
<p> <div class="theorem_thmwrapper " id="thm2.7">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">2.5</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let \(t, s, u, v \in \mathbb {R^{+}}\) be such that \(t\leq u\) and \(s\leq v\). Then the following inequality holds: </p>
<div class="equation" id="t2.7">
<p>
  <div class="equation_content">
    \begin{equation} \label{t2.7} \widetilde{M}_{t,s}\,  \leq \,  \widetilde{M}_{u, v}. \end{equation}
  </div>
  <span class="equation_label">2.28</span>
</p>
</div>

  </div>
</div> <div class="proof_wrapper" id="a0000000034">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  <p> Since \(\overline{\lambda }_t\) is \(\log \)-convex, \((\ref{eq02})\) gives \((\ref{t2.7}).\) </p>
  </div>
</div> <div class="corollary_thmwrapper " id="cor2.8">
  <div class="corollary_thmheading">
    <span class="corollary_thmcaption">
    Corollary
    </span>
    <span class="corollary_thmlabel">2.6</span>
  </div>
  <div class="corollary_thmcontent">
  <p> Let <span class="rm"><b class="bfseries">x</b></span> be a non-negative \(n\)-tuple and </p>
<div class="displaymath" id="a0000000036">
  \[ \digamma _t\,  =\,  \digamma _t\textmd{\rm (\textbf{x})}\,  :=\,  \overline{\varphi }_t\Big(\sum _{i=1}^n x_i\Big) - \sum _{i=1}^n \, \overline{\varphi }_t(x_i). \]
</div>
<p> Then the following statements are valid: </p>
<ol class="enumerate">
  <li><p>For every \(n\in \mathbb {N}\) and \(s_{1},...,s_{n}\in \mathbb {R^{+}}\), the matrix \(\left[\digamma _{\tfrac {s_{i}+s_{j}}{2}}\right]_{i,j=1}^{n}\) is positive semi-definite. In particular, </p>
<div class="equation" id="nasa1023">
<p>
  <div class="equation_content">
    \begin{equation} \label{nasa1023} \mathrm{det}\left[\digamma _{\tfrac {s_{i}\, +\, s_{j}}{2}}\right]_{i,j=1}^{k}\, \geq \, 0 \end{equation}
  </div>
  <span class="equation_label">2.29</span>
</p>
</div>
<p> for \(k=1,...,n.\) </p>
</li>
  <li><p>The function \(s\rightarrow \digamma _{s}\) is exponentially convex. </p>
</li>
  <li><p>The function \(s\rightarrow \digamma _{s}\) is \(\log \)-convex on \(\mathbb {R^{+}}\) and the following inequality holds for \(0\, {\lt}\, r\, {\lt}\, s\, {\lt}\, t\, {\lt}\, \infty :\) </p>
<div class="equation" id="mohib0123">
<p>
  <div class="equation_content">
    \begin{equation} \label{mohib0123} \digamma _s^{t-r}\,  \le \,  \digamma _r^{t-s}\, \digamma _t^{s-r}. \end{equation}
  </div>
  <span class="equation_label">2.30</span>
</p>
</div>
</li>
</ol>

  </div>
</div> <div class="proof_wrapper" id="a0000000037">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  <p> Setting \({\rm \textbf{y}}= \big(\sum _{i=1}^n x_i, 0, ..., 0\big)\) and \({\rm \textbf{x}}= (x_{1}, ..., x_{n})\) in Theorem <a href="#th2.4">2.2</a>, we obtain the required results. </p>
  </div>
</div> We define the following means of Cauchy type. </p>
<div class="displaymath" id="eq03">
  \begin{align} \label{eq03} \Delta _{t, r}(x) \, & =\, \Big(\tfrac {\digamma _t(x)}{\digamma _r(x)}\Big)^{\tfrac {1}{t-r}},\, \, \, \, \, \, \, \, t,\, r\in \mathbb {R^{+}},\, \, \, r\neq t.\\ \Delta _{r, r}(x)& =\, \exp \Big( \tfrac {\big{(} \sum _{i=1}^n x_i \big{)}^{r}\,  \log \big{(}\sum _{i=1}^n x_i \big{)} - \sum _{i=1}^n {x_i}^{r}\,  \log x_i}{\big{(} \sum _{i=1}^n x_i \big{)}^{r}-\sum _{i=1}^n {x_i}^{r}} - \tfrac {2r-1}{r(r-1)}\Big),\  r\neq 1.\nonumber \\ \Delta _{1, 1}(x)\,  & =\,  \exp \Big(\tfrac {(\sum _{i=1}^n x_i)\,  (\log (\sum _{i=1}^n x_i))^{2}- \sum _{i=1}^n x_i \, (\log x_i)^{2}}{2 \big( (\sum _{i=1}^n x_i)\,  (\log (\sum _{i=1}^n x_i)) - \sum _{i=1}^n x_i\,  \log x_i \big)}- 1\Big).\nonumber \end{align}
</div>
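A quick numerical illustration of the monotonicity asserted in the corollary below (a sketch only: the tuple \(\textbf{x}=(1,2)\) is an arbitrary choice, and \(\digamma _t\) is built from the assumed \(\overline{\varphi }_t(x)=x^{t}/(t(t-1))\)):

```python
def F(t, xs):
    # digamma_t(x) = phi_t(sum x_i) - sum phi_t(x_i),
    # with the assumed phi_t(u) = u^t / (t(t-1))
    phi = lambda u: u ** t / (t * (t - 1))
    return phi(sum(xs)) - sum(phi(x) for x in xs)

def Delta(t, r, xs):
    # Delta_{t,r}(x) for t != r (first line of the display above)
    return (F(t, xs) / F(r, xs)) ** (1.0 / (t - r))

xs = [1.0, 2.0]
print(Delta(3, 2, xs), Delta(4, 3, xs))  # monotone: Delta_{3,2} <= Delta_{4,3}
```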
<p> <div class="corollary_thmwrapper " id="cor2.9">
  <div class="corollary_thmheading">
    <span class="corollary_thmcaption">
    Corollary
    </span>
    <span class="corollary_thmlabel">2.7</span>
  </div>
  <div class="corollary_thmcontent">
  <p> Let \(t, r, u, v \in \mathbb {R^{+}}\) be such that \(t\leq u\) and \(r\leq v\). Then the following inequality holds: </p>
<div class="equation" id="c2.9">
<p>
  <div class="equation_content">
    \begin{equation} \label{c2.9} \Delta _{t, r}(x)\,  \leq \,  \Delta _{u, v}(x). \end{equation}
  </div>
  <span class="equation_label">2.32</span>
</p>
</div>

  </div>
</div> <div class="proof_wrapper" id="a0000000039">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  <p> Since \(\digamma _t(x)\) is \(\log \)-convex, \((\ref{eq03})\) gives \((\ref{c2.9}).\) </p>
  </div>
</div> We define the following Cauchy means, which are similar to <span class="cite">
	[
	<a href="#ja" >9</a>
	]
</span> for \(p_{i}=1\), \(i=1,..., n\). </p>
<div class="equation" id="eq04">
<p>
  <div class="equation_content">
    \begin{equation} \label{eq04} \Upsilon ^{s}_{t, r}(x)\, =\, \Big(\tfrac {r(r-s)}{t(t-s)}\cdot \tfrac {\big(\sum _{i=1}^n x_{i}^{s}\big)^{\tfrac {t}{s}}-\sum _{i=1}^nx_{i}^{t}}{\big(\sum _{i=1}^n x_{i}^{s}\big)^{\tfrac {r}{s}}-\sum _{i=1}^nx_{i}^{r}}\Big)^{\tfrac {1}{t-r}}, \end{equation}
  </div>
  <span class="equation_label">2.33</span>
</p>
</div>
<p> \(t,\, r,\, s\in \mathbb {R^{+}},\, \, t\neq r,\, \, t\neq s,\, \, r\neq s\). </p>
<div class="displaymath" id="a0000000041">
  \begin{equation*}  \Upsilon ^{s}_{s, r}(x)\, =\,  \Big( \tfrac {r(r-s)}{s^{2}}\cdot \tfrac {\sum _{i=1}^n x_{i}^{s}\, \log \big(\sum _{i=1}^n x_i^{s} \big)-s \sum _{i=1}^nx_i^{s}\, \log x_{i}}{\big(\sum _{i=1}^n x_{i}^{s}\big)^{\tfrac {r}{s}}-\sum _{i=1}^nx_{i}^{r}}\Big)^{\tfrac {1}{s-r}} , \, \, \,  r\neq s. \end{equation*}
</div>
<div class="displaymath" id="a0000000042">
  \begin{align*}  \Upsilon ^{s}_{r, r}(x)=\, \exp \Big( \tfrac {\big{(} \sum _{i=1}^n x_i^{s} \big{)}^{\tfrac {r}{s}}\,  \log \big{(}\sum _{i=1}^n x_i^{s} \big{)} -s \sum _{i=1}^n {x_i}^{r}\,  \log x_i}{s \Big(\big{(} \sum _{i=1}^n x_i^{s} \big{)}^{\tfrac {r}{s}}-\sum _{i=1}^n {x_i}^{r}\Big)} - \tfrac {2r-s}{r(r-s)}\Big),\\ r\neq s. \end{align*}
</div>
<div class="displaymath" id="a0000000043">
  \begin{equation*}  \Upsilon ^{s}_{s, s}(x)\, =\,  \exp \Big( \tfrac {\big{(} \sum _{i=1}^n x_i^{s} \big{)}\,  \big(\log \big{(}\sum _{i=1}^n x_i^{s} \big{)}\big)^{2} - s^{2} \sum _{i=1}^n {x_i}^{s}\,  (\log x_i)^{2}}{2s \Big( \big{(} \sum _{i=1}^n x_i^{s} \big{)}\,  \log \big( \sum _{i=1}^n x_i^{s} \big)-s \sum _{i=1}^n {x_i}^{s}\,  \log x_{i}\Big)} - \tfrac {1}{s}\Big). \end{equation*}
</div>
<p> <div class="corollary_thmwrapper " id="cor2.10">
  <div class="corollary_thmheading">
    <span class="corollary_thmcaption">
    Corollary
    </span>
    <span class="corollary_thmlabel">2.8</span>
  </div>
  <div class="corollary_thmcontent">
  <p> Let \(t, r, u, v \in \mathbb {R^{+}}\) be such that \(t\leq u\) and \(r\leq v\). Then the following inequality holds: </p>
<div class="equation" id="c2.10">
<p>
  <div class="equation_content">
    \begin{equation} \label{c2.10} \Upsilon ^{s}_{t, r}(x)\, \leq \,  \Upsilon ^{s}_{u, v}(x) \end{equation}
  </div>
  <span class="equation_label">2.34</span>
</p>
</div>

  </div>
</div> <div class="proof_wrapper" id="a0000000044">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> Let </p>
<div class="equation" id="tas">
<p>
  <div class="equation_content">
    \begin{equation} \label{tas} \digamma _{t}(x)\, :=\, \left\{  \begin{array}{ll} \tfrac {1}{t(t-1)}\Big( \Big(\sum _{i=1}^n x_i \Big)^{t} - \sum _{i=1}^n x_i^{t} \Big), &  \hbox{t $\neq $ 1;} \\ \sum _{i=1}^n x_{i}\, \log \big( \sum _{i=1}^n x_{i}\big) - \sum _{i=1}^n x_{i}\, \log x_{i}, &  \hbox{t = 1.} \end{array} \right. \end{equation}
  </div>
  <span class="equation_label">2.35</span>
</p>
</div>
<p> Using Corollary <a href="#cor2.9">2.7</a>, we have </p>
<div class="displaymath" id="a0000000045">
  \begin{equation*}  \Big( \tfrac {r(r-1)}{t(t-1)}\cdot \tfrac {\big( \sum _{i=1}^n x_{i} \big)^{t} - \sum _{i=1}^n x_i^{t} }{\big( \sum _{i=1}^n x_{i} \big)^{r} - \sum _{i=1}^n x_i^{r}} \Big)^{\tfrac {1}{t-r}} \leq \, \Big( \tfrac {u(u-1)}{v(v-1)}\cdot \tfrac {\big( \sum _{i=1}^n x_{i} \big)^{v} - \sum _{i=1}^n x_i^{v} }{\big( \sum _{i=1}^n x_{i} \big)^{u} - \sum _{i=1}^n x_i^{u}} \Big)^{\tfrac {1}{v-u}}. \end{equation*}
</div>
<p> Since \(s{\gt}0\), substituting \(x_{i}= x_{i}^{s}\), \(t=\tfrac {t}{s}\), \(r=\tfrac {r}{s}\), \(u=\tfrac {u}{s}\) and \(v=\tfrac {v}{s}\) in the above inequality, we get </p>
<div class="displaymath" id="a0000000046">
  \begin{equation*}  \Big( \tfrac {r(r-s)}{t(t-s)}\cdot \tfrac {\big( \sum _{i=1}^n x_{i}^{s} \big)^{\tfrac {t}{s}} - \sum _{i=1}^n x_i^{t} }{\big( \sum _{i=1}^n x_{i}^{s} \big)^{\tfrac {r}{s}} - \sum _{i=1}^n x_i^{r}} \Big)^{\tfrac {s}{t-r}}\leq \, \Big( \tfrac {u(u-s)}{v(v-s)}\cdot \tfrac {\big( \sum _{i=1}^n x_{i}^{s} \big)^{\tfrac {v}{s}} - \sum _{i=1}^n x_i^{v} }{\big( \sum _{i=1}^n x_{i}^{s} \big)^{\tfrac {u}{s}} - \sum _{i=1}^n x_i^{u}} \Big)^{\tfrac {s}{v-u}}. \end{equation*}
</div>
<p> Raising both sides to the power \(\tfrac {1}{s}\), we get (\(\ref{c2.10}\)). </p>
<p><div class="remark_thmwrapper " id="2.11">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">2.9</span>
  </div>
  <div class="remark_thmcontent">
  <p> Let us note that in <span class="cite">
	[
	<a href="#ja1" >8</a>
	]
</span>, the function \(\phi _{t}= t\, \digamma _{t}\) was considered, and it was proved that </p>
<div class="equation" id="2.111">
<p>
  <div class="equation_content">
    \begin{equation} \label{2.111} \phi _{s}^{t-r}\,  \leq \,  \phi _{r}^{t-s}\, \, \phi _{t}^{s-r}. \end{equation}
  </div>
  <span class="equation_label">2.36</span>
</p>
</div>
<p> In <span class="cite">
	[
	<a href="#ja" >9</a>
	]
</span>, it was proved that this implies </p>
<div class="displaymath" id="a0000000048">
  \begin{equation*}  \digamma _{s}^{t-r}\,  \leq \,  \tfrac {r^{t-s}\, t^{s-r}}{s^{t-r}}\, \,  \digamma _{r}^{t-s}\, \,  \digamma _{t}^{s-r}. \end{equation*}
</div>
<p> Since \(\tfrac {r^{t-s}\, t^{s-r}}{s^{t-r}}\, {\lt}\, 1\) by the weighted AM-GM inequality (for \(0\, {\lt}\, r\, {\lt}\, s\, {\lt}\, t\) we have \(r^{t-s}\, t^{s-r}\, {\lt}\, s^{t-r}\)), we conclude that (<a href="#2.111">2.36</a>) is better than (<a href="#mohib0123">2.30</a>).<span class="qed">□</span></p>

  </div>
</div> </p>
<p><div class="theorem_thmwrapper " id="th3.4">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">2.10</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let \(x(\tau )\) and \(y(\tau ) \) be two non-negative real-valued functions defined and decreasing on an interval \([a, b]\), with \(y(\tau ) \succ x(\tau ) \), and </p>
<div class="displaymath" id="a0000000049">
  \begin{equation*}  \overline{\beta }_{t}(x(\tau );\, y(\tau ))\, :=\,  \int _a^b\overline{\varphi }_t(y(\tau ))\, {\rm d}\tau - \int _a^b\overline{\varphi }_t(x(\tau ))\, {\rm d}\tau , \end{equation*}
</div>
<p> and \(\overline{\beta }_{t}\) is positive. </p>
<p>Then the following statements are valid: </p>
<ol class="enumerate">
  <li><p>For every \(n\in \mathbb {N}\) and \(s_{1},...,s_{n}\in \mathbb {R^{+}}\), the matrix \(\left[\overline{\beta }_{\tfrac {s_{i}+s_{j}}{2}}\right]_{i,j=1}^{n}\) is positive semi-definite. In particular, </p>
<div class="equation" id="nasa10231987">
<p>
  <div class="equation_content">
    \begin{equation} \label{nasa10231987} \mathrm{det}\left[\overline{\beta }_{\tfrac {s_{i}\, +\, s_{j}}{2}}\right]_{i,j=1}^{k}\, \geq \, 0 \end{equation}
  </div>
  <span class="equation_label">2.37</span>
</p>
</div>
<p> for \(k=1,...,n.\) </p>
</li>
  <li><p>The function \(s\rightarrow \overline{\beta }_{s}\) is exponentially convex. </p>
</li>
  <li><p>The function \(s\rightarrow \overline{\beta }_{s}\) is \(\log \)-convex on \(\mathbb {R^{+}}\) and the following inequality holds for \(0\, {\lt}\, r\, {\lt}\, s\, {\lt}\, t\, {\lt}\, \infty :\) </p>
<div class="equation" id="mohib0123798">
<p>
  <div class="equation_content">
    \begin{equation} \label{mohib0123798} \left(\overline{\beta }_s\right)^{t-r}\,  \le \,  \left(\overline{\beta }_r\right)^{t-s}\, \left(\overline{\beta }_t\right)^{s-r}. \end{equation}
  </div>
  <span class="equation_label">2.38</span>
</p>
</div>
</li>
</ol>

  </div>
</div> <div class="proof_wrapper" id="a0000000050">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  <p> As in the proof of Theorem <a href="#th2.4">2.2</a>, we use Theorem <a href="#th3.2">1.7</a> instead of Theorem <a href="#th1.2">1.3</a>. </p>
  </div>
</div> <div class="theorem_thmwrapper " id="th3.5">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">2.11</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let \( x(\tau ),\, y(\tau ) : [a, b]\rightarrow \mathbb {R}\) be non-negative, continuous and increasing, let \(G:[a, b]\rightarrow \mathbb {R}\) be a function of bounded variation, and let </p>
<div class="displaymath" id="a0000000052">
  \begin{equation*}  \overline{\Gamma }_{t}(x(\tau ),\, y(\tau );\,  G(\tau ))\, :=\,  \int _a^b\overline{\varphi }_t(y(\tau ))\, {\rm d}G(\tau ) - \int _a^b\overline{\varphi }_t(x(\tau ))\, {\rm d}G(\tau ) \end{equation*}
</div>
<p> such that conditions <span class="rm">(<a href="#t3.301">1.11</a>)</span> and <span class="rm">(<a href="#t3.302">1.12</a>)</span> are satisfied and \(\overline{\Gamma }_{t}\) is positive. </p>
<p>Then the following statements are valid: </p>
<ol class="enumerate">
  <li><p>For every \(n\in \mathbb {N}\) and \(s_{1},...,s_{n}\in \mathbb {R^{+}}\), the matrix \(\left[\overline{\Gamma }_{\tfrac {s_{i}+s_{j}}{2}}\right]_{i,j=1}^{n}\) is positive semi-definite. In particular, </p>
<div class="equation" id="nasa1023112">
<p>
  <div class="equation_content">
    \begin{equation} \label{nasa1023112} \mathrm{det}\left[\overline{\Gamma }_{\tfrac {s_{i}\, +\, s_{j}}{2}}\right]_{i,j=1}^{k}\, \geq \, 0 \end{equation}
  </div>
  <span class="equation_label">2.39</span>
</p>
</div>
<p> for \(k=1,...,n.\) </p>
</li>
  <li><p>The function \(s\rightarrow \overline{\Gamma }_{s}\) is exponentially convex. </p>
</li>
  <li><p>The function \(s\rightarrow \overline{\Gamma }_{s}\) is \(\log \)-convex on \(\mathbb {R^{+}}\) and the following inequality holds for \(0\, {\lt}\, r\, {\lt}\, s\, {\lt}\, t\, {\lt}\, \infty :\) </p>
<div class="equation" id="mohib012312">
<p>
  <div class="equation_content">
    \begin{equation} \label{mohib012312} \left(\overline{\Gamma }_s\right)^{t-r}\,  \le \,  \left(\overline{\Gamma }_r\right)^{t-s}\, \left(\overline{\Gamma }_t\right)^{s-r}. \end{equation}
  </div>
  <span class="equation_label">2.40</span>
</p>
</div>
</li>
</ol>

  </div>
</div> </p>
<p><div class="proof_wrapper" id="a0000000053">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  <p>As in the proof of Theorem <a href="#th2.4">2.2</a>, we use Theorem <a href="#th3.3">1.8</a> instead of Theorem <a href="#th1.2">1.3</a>. </p>
  </div>
</div> </p>
<p><div class="theorem_thmwrapper " id="javi">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">2.12</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let \(F(\tau )\) and \(G(\tau )\) be non-negative, continuous and increasing functions defined on the interval \([0, +\infty )\) such that \(F(0) = G(0) = 0\) and \(\overline{F}(\tau ) \succ \overline{G}(\tau )\), where \(\overline{F}(\tau )\) and \(\overline{G}(\tau )\) are defined in <span class="rm">(<a href="#pb101">1.14</a>)</span>, </p>
<div class="displaymath" id="a0000000054">
  \begin{equation*}  \theta _{t}(\tau ,\, G(\tau );\, F(\tau ))\, :=\, \int _{0}^{\infty }\overline{\varphi }_t(\tau )\, {\rm d}G(\tau ) - \int _{0}^{\infty } \overline{\varphi }_t(\tau )\, {\rm d}F(\tau ), \end{equation*}
</div>
<p> and \(\theta _{t}\) is positive. </p>
<p>Then the following statements are valid: </p>
<ol class="enumerate">
  <li><p>For every \(n\in \mathbb {N}\) and \(s_{1},...,s_{n}\in \mathbb {R^{+}}\), the matrix \(\left[\theta _{\tfrac {s_{i}+s_{j}}{2}}\right]_{i,j=1}^{n}\) is positive semi-definite. In particular, </p>
<div class="equation" id="nasa1023112q">
<p>
  <div class="equation_content">
    \begin{equation} \label{nasa1023112q} \mathrm{det}\left[\theta _{\tfrac {s_{i}\, +\, s_{j}}{2}}\right]_{i,j=1}^{k}\, \geq \, 0 \end{equation}
  </div>
  <span class="equation_label">2.41</span>
</p>
</div>
<p> for \(k=1,...,n.\) </p>
</li>
  <li><p>The function \(s\rightarrow \theta _{s}\) is exponentially convex. </p>
</li>
  <li><p>The function \(s\rightarrow \theta _{s}\) is \(\log \)-convex on \(\mathbb {R^{+}}\) and the following inequality holds for \(0\, {\lt}\, r\, {\lt}\, s\, {\lt}\, t\, {\lt}\, \infty :\) </p>
<div class="equation" id="mohib012312q">
<p>
  <div class="equation_content">
    \begin{equation} \label{mohib012312q} \theta _s^{t-r}\,  \le \,  \theta _r^{t-s}\, \theta _t^{s-r}. \end{equation}
  </div>
  <span class="equation_label">2.42</span>
</p>
</div>
</li>
</ol>

  </div>
</div> <div class="proof_wrapper" id="a0000000055">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">â–¼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> As in the proof of Theorem <a href="#th2.4">2.2</a>, we use Theorem <a href="#pb1">1.10</a> instead of Theorem <a href="#th1.2">1.3</a>. <div class="proof_wrapper" id="a0000000056">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">â–¼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> As in <span class="cite">
	[
	<a href="#mnop" >1</a>
	]
</span>, we define the following means of Cauchy type. </p>
<div class="displaymath" id="jav1">
  \begin{align} \label{jav1} \overline{\theta }_{t, s} \, & =\, \Big(\tfrac {{\theta }_t}{{\theta }_s}\Big)^{\tfrac {1}{t-s}},\, \, \, \, \, \, \, \, \, \, \, t,\, s\in \mathbb {R^{+}},\, \, \, \, s\neq t.\\ \overline{\theta }_{s, s}\, & =\,  \exp \Big( \tfrac {\int _{0}^{\infty } \tau ^{s}\, \log \tau \, {\rm d}G(\tau )- \int _{0}^{\infty } \tau ^{s}\, \log \tau \, {\rm d}F(\tau )}{\int _{0}^{\infty }\tau ^{s}\, {\rm d}G(\tau )- \int _{0}^{\infty }\tau ^{s}\, {\rm d}F(\tau )}-\tfrac {2s-1}{s(s-1)}\Big) , \, \, \,  s\neq 1.\nonumber \\ \overline{\theta }_{1, 1}\, &  =\,  \exp \Big( \tfrac {\int _{0}^{\infty }\tau \, \log ^{2}\tau \,  {\rm d}G(\tau )- \int _{0}^{\infty }\tau \, \log ^{2}\tau \, {\rm d}F(\tau )}{2\Big(\int _{0}^{\infty }\tau \, \log \tau \, {\rm d}G(\tau )- \int _{0}^{\infty }\tau \, \log \tau \, {\rm d}F(\tau )\Big)} - 1\Big).\nonumber \end{align}
</div>
<p> <div class="theorem_thmwrapper " id="thjav">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">2.13</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let \(t, s, u, v \in \mathbb {R^{+}}\) be such that \(t\leq u\) and \(s\leq v\). Then the following inequality holds: </p>
<div class="equation" id="jav2">
<p>
  <div class="equation_content">
    \begin{equation} \label{jav2} \overline{\theta }_{t,s}\,  \leq \,  \overline{\theta }_{u, v}. \end{equation}
  </div>
  <span class="equation_label">2.44</span>
</p>
</div>

  </div>
</div> <div class="proof_wrapper" id="a0000000057">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">â–¼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> Since \(\theta _t\) is \(\log \)-convex, therefore by \((\ref{jav1})\) we get \((\ref{jav2}).\) <div class="proof_wrapper" id="a0000000058">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">â–¼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> <div class="remark_thmwrapper " id="rmk3.6">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">2.14</span>
  </div>
  <div class="remark_thmcontent">
  <p> As in <span class="cite">
	[
	<a href="#mnop" >1</a>
	]
</span>, we can use Theorem <a href="#th2.4">2.2</a>, Theorem <a href="#th2.6">2.4</a>, Corollary <a href="#cor2.8">2.6</a>, Theorem <a href="#th3.4">2.10</a>, Theorem <a href="#th3.5">2.11</a> and Theorem <a href="#javi">2.12</a> to obtain the corresponding Cauchy means.<span class="qed">□</span></p>

  </div>
</div> </p>
<h1 id="a0000000059">3 Multiplicative Majorization</h1>
<p> <div class="lemma_thmwrapper " id="lem3.7">
  <div class="lemma_thmheading">
    <span class="lemma_thmcaption">
    Lemma
    </span>
    <span class="lemma_thmlabel">3.1</span>
  </div>
  <div class="lemma_thmcontent">
  <p> Given \(t\in \mathbb {R}\), define the function </p>
<div class="equation" id="l2.101">
<p>
  <div class="equation_content">
    \begin{equation} \label{l2.101} \psi _{t}(x)\, :=\, \left\{  \begin{array}{ll} \tfrac {1}{t^{2}}\,  {\rm e}^{tx}, &  \hbox{t\, $\neq $ 0;} \\ \tfrac {1}{2}\, x^{2}, &  \hbox{t\, = 0,} \end{array} \right. \end{equation}
  </div>
  <span class="equation_label">3.45</span>
</p>
</div>
<p>Then \({\psi }''_{t}(x)= {\rm e}^{tx}\), that is, \(\psi _{t}(x)\) is convex for \(x\in \mathbb {R}.\) </p>
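The convexity claim can be checked without symbolic computation. This sketch (an illustration only; the sample points and step size \(h\) are arbitrary) approximates \(\psi ''_{t}(x)\) by a central second difference and compares it with \({\rm e}^{tx}\), including the case \(t=0\):

```python
import math

def psi(t, x):
    # psi_t from (3.45)
    return math.exp(t * x) / t ** 2 if t != 0 else x * x / 2

def second_diff(t, x, h=1e-4):
    # central second difference: approximates psi_t''(x)
    return (psi(t, x + h) - 2 * psi(t, x) + psi(t, x - h)) / h ** 2

for t in (-1.5, 0.0, 2.0):
    for x in (-1.0, 0.3, 1.0):
        target = math.exp(t * x)  # claimed value of psi_t''(x)
        assert abs(second_diff(t, x) - target) < 1e-4 * max(1.0, target)
```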

  </div>
</div> <div class="theorem_thmwrapper " id="thj1">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">3.2</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let <span class="rm"><b class="bfseries">x</b></span> and <span class="rm"><b class="bfseries">y</b></span> be two real \(n\)-tuples such that \( \textbf{y} \succ \textbf{x} \), </p>
<div class="displaymath" id="a0000000060">
  \[ {\xi }_t \, =\,  {\xi }_t\textmd{\rm (\textbf{x};\,  \textbf{y})}\,  :=\,  \sum _{i=1}^n\,  {\psi }_t(y_i) - \sum _{i=1}^n\,  {\psi }_t(x_i), \]
</div>
<p> and suppose that the \(x_{[i]}\)’s and \(y_{[i]}\)’s are not all equal.<br />Then the following statements are valid: </p>
<ol class="enumerate">
  <li><p>For every \(n\in \mathbb {N}\) and \(s_{1},...,s_{n}\in \mathbb {R}\), the matrix \(\left[\xi _{\tfrac {s_{i}+s_{j}}{2}}\right]_{i,j=1}^{n}\) is positive semi-definite. In particular, </p>
<div class="equation" id="nasa1023112qz">
<p>
  <div class="equation_content">
    \begin{equation} \label{nasa1023112qz} \mathrm{det}\left[\xi _{\tfrac {s_{i}\, +\, s_{j}}{2}}\right]_{i,j=1}^{k}\, \geq \, 0 \end{equation}
  </div>
  <span class="equation_label">3.46</span>
</p>
</div>
<p> for \(k=1,...,n.\) </p>
</li>
  <li><p>The function \(s\rightarrow \xi _{s}\) is exponentially convex. </p>
</li>
  <li><p>The function \(s\rightarrow \xi _{s}\) is \(\log \)-convex on \(\mathbb {R}\) and the following inequality holds for \(-\infty \, {\lt}\, r\, {\lt}\, s\, {\lt}\, t\, {\lt}\, \infty :\) </p>
<div class="equation" id="mohib012312qz">
<p>
  <div class="equation_content">
    \begin{equation} \label{mohib012312qz} \xi _s^{t-r}\,  \le \,  \xi _r^{t-s}\, \xi _t^{s-r}. \end{equation}
  </div>
  <span class="equation_label">3.47</span>
</p>
</div>
</li>
</ol>

  </div>
</div> <div class="proof_wrapper" id="a0000000061">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">â–¼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> As in the proof of Theorem <a href="#th2.4">2.2</a>, we use \(\psi _{t}\) instead of \(\overline{\varphi }_t\). <div class="proof_wrapper" id="a0000000062">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">â–¼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> As in <span class="cite">
	[
	<a href="#mnop" >1</a>
	]
</span>, we define the following means of Cauchy type. </p>
<div class="displaymath" id="j102">
  \begin{align} \label{j102} \Theta _{t, s} \, & =\, \Big(\tfrac {{\xi }_t}{{\xi }_s}\Big)^{\tfrac {1}{t-s}},\, \, \, \, \, \, \, \, \, \, \, t,\, s\in \mathbb {R},\, \, \, \, s\neq t.\\ \Theta _{s, s}\, & =\,  \exp \Big( \tfrac {\sum _{i=1}^{n}y_{i}\, {\rm e}^{sy_{i}}- \sum _{i=1}^{n}x_{i}\, {\rm e}^{sx_{i}}}{\sum _{i=1}^{n}{\rm e}^{sy_{i}}-\sum _{i=1}^{n}{\rm e}^{sx_{i}}}-\tfrac {2}{s}\Big) , \, \, \,  s\neq 0.\nonumber \\ \Theta _{0, 0}\, &  =\,  \exp \Big( \tfrac {\sum _{i=1}^{n}y^{3}_{i}-\sum _{i=1}^{n}x^{3}_{i}}{3\big(\sum _{i=1}^{n}y^{2}_{i}-\sum _{i=1}^{n}x^{2}_{i}\big)} \Big).\nonumber \end{align}
</div>
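As an illustration of the monotonicity stated in the next theorem (a sketch only, with the arbitrary majorized pair \(\textbf{x}=(2,2)\), \(\textbf{y}=(3,1)\); here \(\xi _t\) is built from \(\psi _{t}\) of (3.45)):

```python
import math

def psi(t, u):
    # psi_t from (3.45)
    return math.exp(t * u) / t ** 2 if t != 0 else u * u / 2

# additively majorized pair: y = (3, 1) majorizes x = (2, 2)
xs, ys = [2.0, 2.0], [3.0, 1.0]

def xi(t):
    return sum(psi(t, y) for y in ys) - sum(psi(t, x) for x in xs)

def Theta(t, s):
    # first line of the display above, t != s
    return (xi(t) / xi(s)) ** (1.0 / (t - s))

print(Theta(2, 1), Theta(3, 2))  # monotone in each argument
```

Note that, unlike the earlier families, \(t\) and \(s\) here may be negative, since \(\psi _t\) is defined on all of \(\mathbb {R}\).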
<p> <div class="theorem_thmwrapper " id="thj2">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">3.3</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let \(t, s, u, v \in \mathbb {R}\) be such that \(t\leq u\) and \(s\leq v\). Then the following inequality holds: </p>
<div class="equation" id="j201">
<p>
  <div class="equation_content">
    \begin{equation} \label{j201} \Theta _{t,s}\,  \leq \,  \Theta _{u, v}. \end{equation}
  </div>
  <span class="equation_label">3.49</span>
</p>
</div>

  </div>
</div> <div class="proof_wrapper" id="a0000000063">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">â–¼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> Since \(\xi _t\) is \(\log \)-convex, therefore by \((\ref{j102})\) we get \((\ref{j201}).\) <div class="proof_wrapper" id="a0000000064">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">â–¼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> <div class="theorem_thmwrapper " id="thj3">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">3.4</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let <span class="rm"><b class="bfseries">x</b></span> and <span class="rm"><b class="bfseries">y</b></span> be two decreasing real \(n\)-tuples, let \(\textbf{p}=(p_1,...,p_n)\) be a real \(n\)-tuple, and let </p>
<div class="displaymath" id="a0000000065">
  \[ \overline{\xi }_t\,  =\,  \overline{\xi }_t\textmd{\rm (\textbf{x}, \textbf{y}; \textbf{p})} \, :=\,  \sum _{i=1}^n\, p_i\, {\psi }_t(y_i) - \sum _{i=1}^n\, p_i\,  {\psi }_t(x_i), \]
</div>
<p> such that conditions <span class="rm">(<a href="#t1.401">1.6</a>)</span> and <span class="rm">(<a href="#t1.402">1.7</a>)</span> are satisfied and \(\overline{\xi }_t\) is positive. </p>
<p>Then the following statements are valid: </p>
<ol class="enumerate">
  <li><p>For every \(n\in \mathbb {N}\) and \(s_{1},...,s_{n}\in \mathbb {R}\), the matrix \(\left[\overline{\xi }_{\tfrac {s_{i}+s_{j}}{2}}\right]_{i,j=1}^{n}\) is positive semi-definite. In particular, </p>
<div class="equation" id="nasa1023112qzl">
<p>
  <div class="equation_content">
    \begin{equation} \label{nasa1023112qzl} \mathrm{det}\left[\overline{\xi }_{\tfrac {s_{i}\, +\, s_{j}}{2}}\right]_{i,j=1}^{k}\, \geq \, 0 \end{equation}
  </div>
  <span class="equation_label">3.50</span>
</p>
</div>
<p> for \(k=1,...,n.\) </p>
</li>
  <li><p>The function \(s\rightarrow \overline{\xi }_{s}\) is exponentially convex. </p>
</li>
  <li><p>The function \(s\rightarrow \overline{\xi }_{s}\) is \(\log \)-convex on \(\mathbb {R}\), and the following inequality holds for \(-\infty \, {\lt}\, r\, {\lt}\, s\, {\lt}\, t\, {\lt}\, \infty :\) </p>
<div class="equation" id="mohib012312qzl">
<p>
  <div class="equation_content">
    \begin{equation} \label{mohib012312qzl} \left(\overline{\xi }_s\right)^{t-r}\,  \le \,  \left(\overline{\xi }_r\right)^{t-s}\, \left(\overline{\xi }_t\right)^{s-r}. \end{equation}
  </div>
  <span class="equation_label">3.51</span>
</p>
</div>
</li>
</ol>
<p><div class="proof_wrapper" id="a0000000066">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> We proceed as in the proof of Theorem <a href="#th2.4">2.2</a>, using Theorem <a href="#th1.4">1.5</a> instead of Theorem <a href="#th1.2">1.3</a> and \(\psi _{t}\) instead of \(\overline{\varphi }_t\). <div class="proof_wrapper" id="a0000000067">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> </p>

  </div>
</div> As in <span class="cite">
	[
	<a href="#mnop" >1</a>
	]
</span>, we define the following means of Cauchy type. </p>
<div class="displaymath" id="j302">
  \begin{align} \label{j302} \overline{\Theta }_{t, s} \, & =\, \Big(\tfrac {\overline{\xi }_t}{\overline{\xi }_s}\Big)^{\tfrac {1}{t-s}},\, \, \, \, \, \, \, \, \, \, t,\, s\in \mathbb {R},\, \, \, \, \, s\neq t.\\ \overline{\Theta }_{s, s}\, & =\,  \exp \Big( \tfrac {\sum _{i=1}^{n}p_{i}\, y_{i}\, {\rm e}^{sy_{i}}- \sum _{i=1}^{n}p_{i}\, x_{i}\, {\rm e}^{sx_{i}}}{\sum _{i=1}^{n}p_{i}\, {\rm e}^{sy_{i}}-\sum _{i=1}^{n}p_{i}\, {\rm e}^{sx_{i}}}-\tfrac {2}{s}\Big) , \, \, \,  s\neq 0.\nonumber \\ \overline{\Theta }_{0, 0}\,  & =\,  \exp \Big( \tfrac {\sum _{i=1}^{n}p_{i}\, y^{3}_{i}-\sum _{i=1}^{n}p_{i}\, x^{3}_{i}}{3\big(\sum _{i=1}^{n}p_{i}\, y^{2}_{i}-\sum _{i=1}^{n}p_{i}\, x^{2}_{i}\big)} \Big).\nonumber \end{align}
</div>
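<p>As a sanity check, the statements of Theorem 3.4 and the monotonicity of the means \(\overline{\Theta }_{t,s}\) can be probed numerically on a small example. The sketch below assumes \(\psi _t(x)={\rm e}^{tx}/t^{2}\) for \(t\neq 0\) and \(\psi _0(x)=x^{2}/2\), as the closed forms of \(\overline{\Theta }_{s,s}\) and \(\overline{\Theta }_{0,0}\) above suggest; the tuples and weights are illustrative only.</p>

```python
import numpy as np

# Hedged numerical sketch of Theorem 3.4; psi_t is an assumption deduced
# from the closed forms of Theta-bar_{s,s} and Theta-bar_{0,0} above.
def psi(t, v):
    v = np.asarray(v, dtype=float)
    return np.exp(t * v) / t**2 if t != 0 else v**2 / 2

# Decreasing tuples and weights with equal weighted sums and dominating
# weighted partial sums (p.y: 3,5,5 vs p.x: 2,4,5), so xi-bar_t > 0.
x = np.array([2.0, 1.0, 1.0])
y = np.array([3.0, 1.0, 0.0])
p = np.array([1.0, 2.0, 1.0])

def xi_bar(t):
    return p @ psi(t, y) - p @ psi(t, x)

def theta_bar(t, s):
    return (xi_bar(t) / xi_bar(s)) ** (1.0 / (t - s))

# (3.50): the matrix [xi-bar_{(s_i+s_j)/2}] is positive semi-definite.
ss = np.array([-1.0, 0.0, 1.0, 2.0])
M = np.array([[xi_bar((a + b) / 2) for b in ss] for a in ss])
psd_ok = np.linalg.eigvalsh(M).min() > -1e-9

# (3.51): log-convexity, and Theorem 3.5-style monotonicity of the means.
r, s, t = -1.0, 0.5, 2.0
logconvex_ok = xi_bar(s) ** (t - r) <= xi_bar(r) ** (t - s) * xi_bar(t) ** (s - r)
monotone_ok = theta_bar(1.0, 2.0) <= theta_bar(2.0, 3.0)
print(psd_ok, logconvex_ok, monotone_ok)
```
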
<p> <div class="theorem_thmwrapper " id="thj4">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">3.5</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let \(t, s, u, v \in \mathbb {R}\) be such that \(t\leq u\) and \(s\leq v\). Then the following inequality is valid: </p>
<div class="equation" id="j401">
<p>
  <div class="equation_content">
    \begin{equation} \label{j401} \overline{\Theta }_{t,s}\,  \leq \,  \overline{\Theta }_{u, v}. \end{equation}
  </div>
  <span class="equation_label">3.53</span>
</p>
</div>

  </div>
</div> <div class="proof_wrapper" id="a0000000068">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> Since \(\overline{\xi }_t\) is \(\log \)-convex, by \((\ref{j302})\) we obtain \((\ref{j401})\). <div class="proof_wrapper" id="a0000000069">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> <div class="corollary_thmwrapper " id="cj1">
  <div class="corollary_thmheading">
    <span class="corollary_thmcaption">
    Corollary
    </span>
    <span class="corollary_thmlabel">3.6</span>
  </div>
  <div class="corollary_thmcontent">
  <p> Let <span class="rm"><b class="bfseries">x</b></span> and <span class="rm"><b class="bfseries">y</b></span> be two positive \(n\)-tuples with \(\textbf{x} \prec _{\times } \textbf{y}\), </p>
<div class="displaymath" id="a0000000070">
  \begin{equation*}  \Omega _t\textmd{\rm (log \textbf{x};\, log \textbf{y})}\,  =\,  \xi _t\textmd{\rm (\textbf{x};\,  \textbf{y})}\,  :=\, \left\{  \begin{array}{ll} \tfrac {1}{t^{2}}\, \Big(\sum _{i=1}^n y_{i}^{t} - \sum _{i=1}^n x_{i}^{t}\Big) , &  \hbox{t\, $\neq $ 0;} \\ \tfrac {1}{2}\, \Big(\sum _{i=1}^n {\rm log}^{2}y_{i} - \sum _{i=1}^n {\rm log}^{2}x_{i}\Big), &  \hbox{t\, = 0,} \end{array} \right. \end{equation*}
</div>
<p> and the \(x_{[i]}\)’s and \(y_{[i]}\)’s are not all equal.<br />Then the following statements are valid: </p>
<ol class="enumerate">
  <li><p>For every \(n\in \mathbb {N}\) and \(s_{1},...,s_{n}\in \mathbb {R}\), the matrix \(\left[\Omega _{\tfrac {s_{i}+s_{j}}{2}}\right]_{i,j=1}^{n}\) is positive semi-definite. In particular, </p>
<div class="equation" id="nasa1023112qzl23">
<p>
  <div class="equation_content">
    \begin{equation} \label{nasa1023112qzl23} \mathrm{det}\left[\Omega _{\tfrac {s_{i}\, +\, s_{j}}{2}}\right]_{i,j=1}^{k}\, \geq \, 0 \end{equation}
  </div>
  <span class="equation_label">3.54</span>
</p>
</div>
<p> for \(k=1,...,n.\) </p>
</li>
  <li><p>The function \(s\rightarrow \Omega _{s}\) is exponentially convex. </p>
</li>
  <li><p>The function \(s\rightarrow \Omega _{s}\) is \(\log \)-convex on \(\mathbb {R}\), and the following inequality holds for \(-\infty \, {\lt}\, r\, {\lt}\, s\, {\lt}\, t\, {\lt}\, \infty :\) </p>
<div class="equation" id="mohib012312qzl23">
<p>
  <div class="equation_content">
    \begin{equation} \label{mohib012312qzl23} \Omega _s^{t-r}\,  \le \,  \Omega _r^{t-s}\, \Omega _t^{s-r}. \end{equation}
  </div>
  <span class="equation_label">3.55</span>
</p>
</div>
</li>
</ol>

  </div>
</div> <div class="proof_wrapper" id="a0000000071">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> We proceed as in the proof of Theorem <a href="#th2.4">2.2</a>, applying Theorem <a href="#th1.2">1.3</a> with \(\textbf{x}\) replaced by \(\log \textbf{x}\) and \(\textbf{y}\) by \(\log \textbf{y}\), and using \(\psi _{t}\) instead of \(\overline{\varphi }_t\). <div class="proof_wrapper" id="a0000000072">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> As in <span class="cite">
	[
	<a href="#mnop" >1</a>
	]
</span>, we define the following means of Cauchy type. </p>
<div class="displaymath" id="cj102">
  \begin{align} \label{cj102} \Psi _{t, s} \, & =\, \Big(\tfrac {\Omega _t}{\Omega _s}\Big)^{\tfrac {1}{t-s}},\, \, \, \, \, \, \, \, \, \, \, t,\, s\in \mathbb {R},\, \, \, \, \, s\neq t.\\ \Psi _{s, s}\, & =\,  \exp \Big( \tfrac {\sum _{i=1}^{n}y_{i}^{s}\, \log y_{i} -\sum _{i=1}^{n}x_{i}^{s}\, \log x_{i}}{\sum _{i=1}^{n} y_{i}^{s}-\sum _{i=1}^{n}x_{i}^{s}}-\tfrac {2}{s}\Big) , \, \, \, \,  s\neq 0.\nonumber \\ \Psi _{0, 0} \, & =\,  \exp \Big( \tfrac {\sum _{i=1}^{n}\log ^{3}y_{i}-\sum _{i=1}^{n}\log ^{3}x_{i}}{3\big(\sum _{i=1}^{n}\log ^{2}y_{i}-\sum _{i=1}^{n}\log ^{2}x_{i}\big)} \Big).\nonumber \end{align}
</div>
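<p>A similar hedged check applies to Corollaries 3.6 and 3.7: pick positive tuples with \(\textbf{x}\prec _{\times }\textbf{y}\) (equal products, with the partial products of \(\textbf{y}\) dominating those of \(\textbf{x}\)) and verify positivity and log-convexity of \(\Omega _t\) and the monotonicity of \(\Psi _{t,s}\). The data below are an illustrative assumption, not taken from the paper.</p>

```python
import numpy as np

# Sketch check of Corollaries 3.6/3.7 for multiplicative majorization.
# log x is majorized by log y: products equal (4*4*1 = 8*2*1), and the
# partial products of y dominate those of x.
x = np.array([4.0, 4.0, 1.0])
y = np.array([8.0, 2.0, 1.0])

def omega(t):
    if t != 0:
        return (np.sum(y**t) - np.sum(x**t)) / t**2
    return (np.sum(np.log(y)**2) - np.sum(np.log(x)**2)) / 2

def Psi(t, s):
    return (omega(t) / omega(s)) ** (1.0 / (t - s))

# Omega_t is positive and log-convex (3.55) ...
r, s, t = -1.0, 1.0, 2.0
assert omega(0) > 0
assert omega(s) ** (t - r) <= omega(r) ** (t - s) * omega(t) ** (s - r)

# ... hence the means Psi_{t,s} increase in each parameter (3.57).
print(Psi(1.0, 2.0) <= Psi(2.0, 3.0), Psi(-1.0, 0.5) <= Psi(1.0, 0.5))
```
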
<p> <div class="corollary_thmwrapper " id="cj2">
  <div class="corollary_thmheading">
    <span class="corollary_thmcaption">
    Corollary
    </span>
    <span class="corollary_thmlabel">3.7</span>
  </div>
  <div class="corollary_thmcontent">
  <p> Let \(t, s, u, v \in \mathbb {R}\) be such that \(t\leq u\) and \(s\leq v\). Then the following inequality is valid: </p>
<div class="equation" id="cj201">
<p>
  <div class="equation_content">
    \begin{equation} \label{cj201} \Psi _{t,s}\,  \leq \,  \Psi _{u, v}. \end{equation}
  </div>
  <span class="equation_label">3.57</span>
</p>
</div>

  </div>
</div> <div class="proof_wrapper" id="a0000000073">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> Since \(\Omega _t\) is \(\log \)-convex, by \((\ref{cj102})\) we obtain \((\ref{cj201})\). <div class="proof_wrapper" id="a0000000074">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> <div class="corollary_thmwrapper " id="cj3">
  <div class="corollary_thmheading">
    <span class="corollary_thmcaption">
    Corollary
    </span>
    <span class="corollary_thmlabel">3.8</span>
  </div>
  <div class="corollary_thmcontent">
  <p> Let <span class="rm"><b class="bfseries">x</b></span> and <span class="rm"><b class="bfseries">y</b></span> be two positive decreasing \(n\)-tuples, let \(\textbf{p} = (p_{1}, ..., p_{n} )\) be a real \(n\)-tuple, and let </p>
<div class="displaymath" id="a0000000075">
  \begin{eqnarray*}  \lefteqn{\overline{\Omega }_t\textmd{\rm (\textbf{x},\, \textbf{y};\, \textbf{p})}=} \\ & &  =\, \overline{\xi }_t\textmd{\rm (log \textbf{x},\,  log \textbf{y};\, \, \textbf{p})}\,  :=\, \left\{  \begin{array}{ll} \tfrac {1}{t^{2}}\, \Big(\sum _{i=1}^n p_{i}\, y_{i}^{t} - \sum _{i=1}^n p_{i}\, x_{i}^{t}\Big) , &  \hbox{t\, $\neq $ 0;} \\ \tfrac {1}{2}\, \Big(\sum _{i=1}^n p_{i}\, {\rm log}^{2}y_{i} - \sum _{i=1}^n p_{i}\, {\rm log}^{2}x_{i}\Big), &  \hbox{t\, = 0,} \end{array} \right. \end{eqnarray*}
</div>
<p> such that conditions <span class="rm">(<a href="#t1.401">1.6</a>)</span> and <span class="rm">(<a href="#t1.402">1.7</a>)</span> are satisfied and \(\overline{\Omega }_t\) is positive. </p>
<p>Then the following statements are valid: </p>
<ol class="enumerate">
  <li><p>For every \(n\in \mathbb {N}\) and \(s_{1},...,s_{n}\in \mathbb {R}\), the matrix \(\left[\overline{\Omega }_{\tfrac {s_{i}+s_{j}}{2}}\right]_{i,j=1}^{n}\) is positive semi-definite. In particular, </p>
<div class="equation" id="nasa1023112qzl234">
<p>
  <div class="equation_content">
    \begin{equation} \label{nasa1023112qzl234} \mathrm{det}\left[\overline{\Omega }_{\tfrac {s_{i}\, +\, s_{j}}{2}}\right]_{i,j=1}^{k}\, \geq \, 0 \end{equation}
  </div>
  <span class="equation_label">3.58</span>
</p>
</div>
<p> for \(k=1,...,n.\) </p>
</li>
  <li><p>The function \(s\rightarrow \overline{\Omega }_{s}\) is exponentially convex. </p>
</li>
  <li><p>The function \(s\rightarrow \overline{\Omega }_{s}\) is \(\log \)-convex on \(\mathbb {R}\), and the following inequality holds for \(-\infty \, {\lt}\, r\, {\lt}\, s\, {\lt}\, t\, {\lt}\, \infty :\) </p>
<div class="equation" id="mohib012312qzl234">
<p>
  <div class="equation_content">
    \begin{equation} \label{mohib012312qzl234} \left(\overline{\Omega }_s\right)^{t-r}\,  \le \,  \left(\overline{\Omega }_r\right)^{t-s}\, \left(\overline{\Omega }_t\right)^{s-r}. \end{equation}
  </div>
  <span class="equation_label">3.59</span>
</p>
</div>
</li>
</ol>

  </div>
</div> <div class="proof_wrapper" id="a0000000076">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> We proceed as in the proof of Theorem <a href="#th2.4">2.2</a>, applying Theorem <a href="#th1.4">1.5</a> with \(\textbf{x}\) replaced by \(\log \textbf{x}\) and \(\textbf{y}\) by \(\log \textbf{y}\), and using \(\psi _{t}\) instead of \(\overline{\varphi }_t\). <div class="proof_wrapper" id="a0000000077">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> As in <span class="cite">
	[
	<a href="#mnop" >1</a>
	]
</span>, we define the following means of Cauchy type. </p>
<div class="displaymath" id="cj302">
  \begin{align} \label{cj302} \overline{\Psi }_{t, s} \, & =\, \Big(\tfrac {\overline{\Omega }_t}{\overline{\Omega }_s}\Big)^{\tfrac {1}{t-s}},\, \, \, \, \, \, \, \, \, t,\, s\in \mathbb {R},\, \, \, \, s\neq t.\\ \overline{\Psi }_{s, s}\, & =\,  \exp \Big( \tfrac {\sum _{i=1}^{n}p_{i}\, y_{i}^{s}\, \log y_{i} -\sum _{i=1}^{n}p_{i}\, x_{i}^{s}\, \log x_{i}}{\sum _{i=1}^{n} p_{i}\, y_{i}^{s}-\sum _{i=1}^{n}p_{i}\, x_{i}^{s}}-\tfrac {2}{s}\Big) , \, \, \,  s\neq 0.\nonumber \\ \overline{\Psi }_{0, 0} & = \exp \Big( \tfrac {\sum _{i=1}^{n}p_{i}\, \log ^{3}y_{i}-\sum _{i=1}^{n} p_{i}\, \log ^{3}x_{i}}{3\big(\sum _{i=1}^{n}p_{i}\, \log ^{2}y_{i}-\sum _{i=1}^{n}p_{i}\, \log ^{2}x_{i}\big)} \Big).\nonumber \end{align}
</div>
<p> <div class="corollary_thmwrapper " id="cj4">
  <div class="corollary_thmheading">
    <span class="corollary_thmcaption">
    Corollary
    </span>
    <span class="corollary_thmlabel">3.9</span>
  </div>
  <div class="corollary_thmcontent">
  <p> Let \(t, s, u, v \in \mathbb {R}\) be such that \(t\leq u\) and \(s\leq v\). Then the following inequality is valid: </p>
<div class="equation" id="cj401">
<p>
  <div class="equation_content">
    \begin{equation} \label{cj401} \overline{\Psi }_{t,s} \, \leq \,  \overline{\Psi }_{u, v}. \end{equation}
  </div>
  <span class="equation_label">3.61</span>
</p>
</div>

  </div>
</div> <div class="proof_wrapper" id="a0000000078">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> Since \(\overline{\Omega }_t\) is \(\log \)-convex, by \((\ref{cj302})\) we obtain \((\ref{cj401})\). <div class="proof_wrapper" id="a0000000079">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> </p>
<p><div class="theorem_thmwrapper " id="jt1">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">3.10</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let \(x(\tau )\) and \(y(\tau )\) be two real-valued functions defined and decreasing on an interval \([a, b]\), with \( y(\tau ) \succ x(\tau ) \), and </p>
<div class="displaymath" id="a0000000080">
  \begin{equation*}  \Phi _{t}(x(\tau );\,  y(\tau ))\, :=\,  \int _a^b\psi _t(y(\tau ))\, {\rm d}\tau - \int _a^b\psi _t(x(\tau ))\, {\rm d}\tau , \end{equation*}
</div>
<p> and \(\Phi _{t}\) is positive. </p>
<p>Then the following statements are valid: </p>
<ol class="enumerate">
  <li><p>For every \(n\in \mathbb {N}\) and \(s_{1},...,s_{n}\in \mathbb {R}\), the matrix \(\left[\Phi _{\tfrac {s_{i}+s_{j}}{2}}\right]_{i,j=1}^{n}\) is positive semi-definite. In particular, </p>
<div class="equation" id="nasa1023112qzl23443">
<p>
  <div class="equation_content">
    \begin{equation} \label{nasa1023112qzl23443} \mathrm{det}\left[\Phi _{\tfrac {s_{i}\, +\, s_{j}}{2}}\right]_{i,j=1}^{k}\, \geq \, 0 \end{equation}
  </div>
  <span class="equation_label">3.62</span>
</p>
</div>
<p> for \(k=1,...,n.\) </p>
</li>
  <li><p>The function \(s\rightarrow \Phi _{s}\) is exponentially convex. </p>
</li>
  <li><p>The function \(s\rightarrow \Phi _{s}\) is \(\log \)-convex on \(\mathbb {R}\), and the following inequality holds for \(-\infty \, {\lt}\, r\, {\lt}\, s\, {\lt}\, t\, {\lt}\, \infty :\) </p>
<div class="equation" id="mohib012312qzl23443">
<p>
  <div class="equation_content">
    \begin{equation} \label{mohib012312qzl23443} \Phi _s^{t-r}\,  \le \,  \Phi _r^{t-s}\, \Phi _t^{s-r}. \end{equation}
  </div>
  <span class="equation_label">3.63</span>
</p>
</div>
</li>
</ol>

  </div>
</div> <div class="proof_wrapper" id="a0000000081">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> We proceed as in the proof of Theorem <a href="#th2.4">2.2</a>, using Theorem <a href="#th3.2">1.7</a> instead of Theorem <a href="#th1.2">1.3</a> and \(\psi _{t}\) instead of \(\overline{\varphi }_t\). <div class="proof_wrapper" id="a0000000082">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> <div class="theorem_thmwrapper " id="jt2">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">3.11</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let \(x(\tau ),\, y(\tau ) : [a, b]\rightarrow \mathbb {R}\) be continuous and increasing, let \(G:[a, b]\rightarrow \mathbb {R}\) be a function of bounded variation, and let </p>
<div class="displaymath" id="a0000000083">
  \begin{equation*}  \overline{\Phi }_{t}(x(\tau ),\,  y(\tau );\,  G(\tau ))\, :=\,  \int _a^b\psi _t(y(\tau ))\, {\rm d}G(\tau ) - \int _a^b\psi _t(x(\tau ))\, {\rm d}G(\tau ) \end{equation*}
</div>
<p> such that conditions <span class="rm">(<a href="#t3.301">1.11</a>)</span> and <span class="rm">(<a href="#t3.302">1.12</a>)</span> are satisfied and \(\overline{\Phi }_{t}\) is positive. </p>
<p>Then the following statements are valid: </p>
<ol class="enumerate">
  <li><p>For every \(n\in \mathbb {N}\) and \(s_{1},...,s_{n}\in \mathbb {R}\), the matrix \(\left[\overline{\Phi }_{\tfrac {s_{i}+s_{j}}{2}}\right]_{i,j=1}^{n}\) is positive semi-definite. In particular, </p>
<div class="equation" id="nasa1023112qzl23443">
<p>
  <div class="equation_content">
    \begin{equation} \label{nasa1023112qzl23443} \mathrm{det}\left[\overline{\Phi }_{\tfrac {s_{i}\, +\, s_{j}}{2}}\right]_{i,j=1}^{k}\, \geq \, 0 \end{equation}
  </div>
  <span class="equation_label">3.64</span>
</p>
</div>
<p> for \(k=1,...,n.\) </p>
</li>
  <li><p>The function \(s\rightarrow \overline{\Phi }_{s}\) is exponentially convex. </p>
</li>
  <li><p>The function \(s\rightarrow \overline{\Phi }_{s}\) is \(\log \)-convex on \(\mathbb {R}\), and the following inequality holds for \(-\infty \, {\lt}\, r\, {\lt}\, s\, {\lt}\, t\, {\lt}\, \infty :\) </p>
<div class="equation" id="mohib012312qzl23443">
<p>
  <div class="equation_content">
    \begin{equation} \label{mohib012312qzl23443} \left(\overline{\Phi }_s\right)^{t-r}\,  \le \,  \left(\overline{\Phi }_r\right)^{t-s}\, \left(\overline{\Phi }_t\right)^{s-r}. \end{equation}
  </div>
  <span class="equation_label">3.65</span>
</p>
</div>
</li>
</ol>

  </div>
</div> <div class="proof_wrapper" id="a0000000084">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> We proceed as in the proof of Theorem <a href="#th2.4">2.2</a>, using Theorem <a href="#th3.3">1.8</a> instead of Theorem <a href="#th1.2">1.3</a> and \(\psi _{t}\) instead of \(\overline{\varphi }_t\). <div class="proof_wrapper" id="a0000000085">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> <div class="theorem_thmwrapper " id="wasi">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">3.12</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let \(F(\tau )\) and \(G(\tau )\) be real, continuous and increasing functions on \([0, +\infty )\) such that \(F(0) = G(0) = 0\) and \(\overline{F}(\tau ) \succ \overline{G}(\tau )\), where \(\overline{F}(\tau )\) and \(\overline{G}(\tau )\) are defined in (<a href="#pb101">1.14</a>), </p>
<div class="displaymath" id="a0000000086">
  \begin{equation*}  \vartheta _{t}(\tau ,\, G(\tau );\, F(\tau ))\, :=\, \int _{0}^{\infty }\psi _t(\tau )\, {\rm d}G(\tau ) - \int _{0}^{\infty }\psi _t(\tau )\, {\rm d}F(\tau ), \end{equation*}
</div>
<p> and \(\vartheta _{t}\) is positive. </p>
<p>Then the following statements are valid: </p>
<ol class="enumerate">
  <li><p>For every \(n\in \mathbb {N}\) and \(s_{1},...,s_{n}\in \mathbb {R}\), the matrix \(\left[\vartheta _{\tfrac {s_{i}+s_{j}}{2}}\right]_{i,j=1}^{n}\) is positive semi-definite. In particular, </p>
<div class="equation" id="nasa1023112qzl23443">
<p>
  <div class="equation_content">
    \begin{equation} \label{nasa1023112qzl23443} \mathrm{det}\left[\vartheta _{\tfrac {s_{i}\, +\, s_{j}}{2}}\right]_{i,j=1}^{k}\, \geq \, 0 \end{equation}
  </div>
  <span class="equation_label">3.66</span>
</p>
</div>
<p> for \(k=1,...,n.\) </p>
</li>
  <li><p>The function \(s\rightarrow \vartheta _{s}\) is exponentially convex. </p>
</li>
  <li><p>The function \(s\rightarrow \vartheta _{s}\) is \(\log \)-convex on \(\mathbb {R}\), and the following inequality holds for \(-\infty \, {\lt}\, r\, {\lt}\, s\, {\lt}\, t\, {\lt}\, \infty :\) </p>
<div class="equation" id="mohib012312qzl23443">
<p>
  <div class="equation_content">
    \begin{equation} \label{mohib012312qzl23443} \vartheta _s^{t-r}\,  \le \,  \vartheta _r^{t-s}\, \vartheta _t^{s-r}. \end{equation}
  </div>
  <span class="equation_label">3.67</span>
</p>
</div>
</li>
</ol>

  </div>
</div> <div class="proof_wrapper" id="a0000000087">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> We proceed as in the proof of Theorem <a href="#th2.4">2.2</a>, using Theorem <a href="#pb1">1.10</a> instead of Theorem <a href="#th1.2">1.3</a> and \(\psi _{t}\) instead of \(\overline{\varphi }_{t}\). <div class="proof_wrapper" id="a0000000088">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> As in <span class="cite">
	[
	<a href="#mnop" >1</a>
	]
</span>, we define the following means of Cauchy type. </p>
<div class="displaymath" id="was1">
  \begin{align} \label{was1} \overline{\vartheta }_{t, s} \, & =\, \Big(\tfrac {{\vartheta }_t}{{\vartheta }_s}\Big)^{\tfrac {1}{t-s}},\, \, \, \, \, \, \, \, \, \, \, t,\, s\in \mathbb {R},\, \, \, \, s\neq t.\\ \overline{\vartheta }_{s, s}\, & =\,  \exp \Big( \tfrac {\int _{0}^{\infty } \tau \, {\rm e}^{s\tau }\, {\rm d}G(\tau )- \int _{0}^{\infty } \tau \, {\rm e}^{s\tau }\, {\rm d}F(\tau )}{\int _{0}^{\infty }{\rm e}^{s\tau }\, {\rm d}G(\tau )- \int _{0}^{\infty }{\rm e}^{s\tau }\, {\rm d}F(\tau )}-\tfrac {2}{s}\Big) , \, \, \,  s\neq 0.\nonumber \\ \overline{\vartheta }_{0, 0}\,  & =\,  \exp \Big( \tfrac {\int _{0}^{\infty }\tau ^{3}\, {\rm d}G(\tau )- \int _{0}^{\infty }\tau ^{3}\, {\rm d}F(\tau )}{3\big(\int _{0}^{\infty }\tau ^{2}\, {\rm d}G(\tau )- \int _{0}^{\infty }\tau ^{2}\, {\rm d}F(\tau )\big)}\Big).\nonumber \end{align}
</div>
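<p>For Theorem 3.13, the Stieltjes integrals can be probed by taking \(F\) and \(G\) to be step functions, so each integral collapses to a finite sum over the jump points. The jump data below, and the exponential form \(\psi _t(\tau )={\rm e}^{t\tau }/t^{2}\), are illustrative assumptions only.</p>

```python
import numpy as np

# Toy check of Theorem 3.13 with step functions F and G on [0, infinity):
# each Stieltjes integral reduces to a weighted sum over the jump points.
# Assumption: psi_t(tau) = e^{t tau}/t^2 for t != 0, psi_0(tau) = tau^2/2.
def psi(t, v):
    v = np.asarray(v, dtype=float)
    return np.exp(t * v) / t**2 if t != 0 else v**2 / 2

# Jump points and jump sizes of F and G, chosen so that theta_t > 0:
# the jumps of G spread out those of F while the total mass agrees.
f_pts, f_jumps = np.array([2.0, 1.0, 1.0]), np.array([1.0, 1.0, 1.0])
g_pts, g_jumps = np.array([3.0, 1.0, 0.0]), np.array([1.0, 1.0, 1.0])

def theta(t):
    return g_jumps @ psi(t, g_pts) - f_jumps @ psi(t, f_pts)

def theta_mean(t, s):
    return (theta(t) / theta(s)) ** (1.0 / (t - s))

# monotonicity (3.69): theta-bar_{t,s} <= theta-bar_{u,v} for t <= u, s <= v
print(theta_mean(0.5, 1.0) <= theta_mean(1.0, 2.0))
```
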
<p> <div class="theorem_thmwrapper " id="thwas">
  <div class="theorem_thmheading">
    <span class="theorem_thmcaption">
    Theorem
    </span>
    <span class="theorem_thmlabel">3.13</span>
  </div>
  <div class="theorem_thmcontent">
  <p> Let \(t, s, u, v \in \mathbb {R}\) be such that \(t\leq u\) and \(s\leq v\). Then the following inequality is valid: </p>
<div class="equation" id="was2">
<p>
  <div class="equation_content">
    \begin{equation} \label{was2} \overline{\vartheta }_{t,s}\,  \leq \,  \overline{\vartheta }_{u, v}. \end{equation}
  </div>
  <span class="equation_label">3.69</span>
</p>
</div>

  </div>
</div> <div class="proof_wrapper" id="a0000000089">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> Since \(\vartheta _t\) is \(\log \)-convex, by \((\ref{was1})\) we obtain \((\ref{was2})\). <div class="proof_wrapper" id="a0000000090">
  <div class="proof_heading">
    <span class="proof_caption">
    Proof
    </span>
    <span class="expand-proof">▼</span>
  </div>
  <div class="proof_content">
  
  </div>
</div> <div class="remark_thmwrapper " id="jrmk">
  <div class="remark_thmheading">
    <span class="remark_thmcaption">
    Remark
    </span>
    <span class="remark_thmlabel">3.14</span>
  </div>
  <div class="remark_thmcontent">
  <p> As in <span class="cite">
	[
	<a href="#mnop" >1</a>
	]
</span>, we can use Theorem <a href="#thj1">3.2</a>, Theorem <a href="#thj3">3.4</a>, Corollary <a href="#cj1">3.6</a>, Corollary <a href="#cj3">3.8</a>, Theorem <a href="#jt1">3.10</a>, Theorem <a href="#jt2">3.11</a> and Theorem <a href="#wasi">3.12</a> to obtain corresponding Cauchy means.<span class="qed">□</span></p>

  </div>
</div> </p>
<p><div class="acknowledgement_thmwrapper " id="a0000000091">
  <div class="acknowledgement_thmheading">
    <span class="acknowledgement_thmcaption">
    Acknowledgement
    </span>
  </div>
  <div class="acknowledgement_thmcontent">
  <p>This research work is funded by Higher Education Commission Pakistan. The research of the second author was supported by the Croatian Ministry of Science, Education and Sports under the Research Grants 117-1170889-0888. </p>

  </div>
</div> </p>
<div class="bibliography">
<h1>Bibliography</h1>
<dl class="bibliography">
  <dt><a name="mnop">1</a></dt>
  <dd><p><i class="sc">Anwar, M., Latif, N.</i> and <i class="sc">Pečarić, J.</i>, <i class="it">Positive semi-definite matrices, exponential convexity for Majorization and related Cauchy means</i>, J. Inequal. Appl., vol. 2010, art. id 728251, 19 pp., 2010. doi:10.1155/2010/728251. </p>
</dd>
  <dt><a name="mnpj">2</a></dt>
  <dd><p><i class="sc">Anwar, M., Jakšetić, J., Pečarić, J.</i> and <i class="sc">Rehman, U. A.</i>, <i class="it">Exponential convexity, positive semi-definite matrices and fundamental inequalities</i>, J. Math. Inequal., accepted, Article ID jmi-0376, 19 pages, 2009. </p>
</dd>
  <dt><a name="bp">3</a></dt>
  <dd><p><i class="sc">Boland, P. J.</i> and <i class="sc">Proschan, F.</i>, <i class="it">An integral inequality with applications to order statistics. Reliability and Quality Control</i>, A. P. Basu, ed., North Holland, Amsterdam, pp.&#160;107–116, 1986. </p>
</dd>
  <dt><a name="lf">4</a></dt>
  <dd><p><i class="sc">Fuchs, L.</i>, <i class="it">A new proof of an inequality of Hardy-Littlewood-Pólya</i>, Mat. Tidsskr. B, pp.&#160;53&#8211;54, 1947. </p>
</dd>
  <dt><a name="z">5</a></dt>
  <dd><p><i class="sc">Kadelburg, Z., Dukić, D., Lukić, M.</i> and <i class="sc">Matić, I.</i>, <i class="it">Inequalities of Karamata, Schur and Muirhead, and some applications</i>, The Teaching of Mathematics, <b class="bf">VIII</b>, 1, pp.&#160;31&#8211;45, 2005. </p>
</dd>
  <dt><a name="mo">6</a></dt>
  <dd><p><i class="sc">Marshall, A. W.</i> and <i class="sc">Olkin, I.</i>, <i class="it">Inequalities: Theory of Majorization and its Applications</i>, Academic Press, New York, 1979. </p>
</dd>
  <dt><a name="py">7</a></dt>
  <dd><p><i class="sc">Palomar, D. P.</i> and <i class="sc">Jiang, Y.</i>, <i class="it">MIMO Transceiver Design via Majorization Theory</i>, Foundations and Trends\(^{\circledR }\) in Communications and Information Theory, now Publishers Inc., Hanover, MA, USA. </p>
</dd>
  <dt><a name="ja1">8</a></dt>
  <dd><p><i class="sc">Pečarić, J.</i> and <i class="sc">Rehman, U. A.</i>, <i class="it">On Logarithmic Convexity for Power Sums and related results</i>, J. Inequal. Appl., vol. <b class="bfseries">2008</b>, Article ID 389410, 9 pages, 2008, <span class="tt">doi:10.1155/2008/389410</span>. </p>
</dd>
  <dt><a name="ja">9</a></dt>
  <dd><p><i class="sc">Pečarić, J.</i> and <i class="sc">Rehman, U. A.</i>, <i class="it">On Logarithmic Convexity for Power Sums and related results II</i>, J. Inequal. Appl., vol. <b class="bfseries">2008</b>, Article ID 305623, 12 pages, 2008, <span class="tt">doi:10.1155/2008/305623</span>. </p>
</dd>
  <dt><a name="jpb">10</a></dt>
  <dd><p><i class="sc">Pečarić, J., Proschan, F.</i> and <i class="sc">Tong, Y. L.</i>, <i class="it">Convex Functions, Partial Orderings and Statistical Applications</i>, Academic Press, New York, 1992. </p>
</dd>
  <dt><a name="jpp">11</a></dt>
  <dd><p><i class="sc">Pečarić, J.</i>, <i class="it">On some inequalities for functions with nondecreasing increments</i>, J. Math. Anal. Appl., <b class="bfseries">98</b>, pp.&#160;188–197, 1984. </p>
</dd>
</dl>


</div>
</div> <!--main-text -->
</div> <!-- content-wrapper -->
</div> <!-- content -->
</div> <!-- wrapper -->

<nav class="prev_up_next">
</nav>

<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/jquery.min.js"></script>
<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/plastex.js"></script>
<script type="text/javascript" src="/var/www/clients/client1/web1/web/files/jnaat-files/journals/1/articles/js/svgxuse.js"></script>
</body>
</html>