ON THE CONVERSES OF THE REDUCTION PRINCIPLE IN INNER PRODUCT SPACES
COSTICĂ MUSTĂŢA

Dedicated to Professor Ştefan Cobzaş on his 60th anniversary
Abstract
Let \(H\) be an inner product space, \(X\) a complete subspace of \(H\), and \(Y\) a closed subspace of \(X\). The main result of this Note is the following converse of the Reduction Principle: if \(x_0 \in X\), \(h \in H \setminus X\) and \(y_0 \in Y\) is the element of best approximation of both \(x_0\) and \(h\), \((x_0 - h, x_0 - y_0) = 0\) and \(\operatorname{codim}_X Y = 1\), then \(x_0\) is the element of best approximation of \(h\) in \(X\).
1. Introduction
Let \(H\) be an inner product space, with real inner product \((\cdot,\cdot)\) and the norm \(\|h\| = \sqrt{(h,h)}\), \(h \in H\). For a subset \(M\) of \(H\) and \(h \in H\), the distance of \(h\) to \(M\) is defined by
\[ d(h, M) = \inf\{\|h - m\| : m \in M\}. \]
The set \(M\) is called proximinal if for every \(h \in H\) there exists \(m_0 \in M\) such that
\[ \|h - m_0\| = d(h, M). \]
The set
\[ P_M(h) := \{m \in M : \|h - m\| = d(h, M)\}, \quad h \in H, \]
is called the set of best approximation elements of \(h\) by elements of \(M\), and the mapping \(P_M : H \to 2^M\) is called the metric projection of \(H\) on \(M\).

If \(\operatorname{card} P_M(h) = 1\) for every \(h \in H\), then the set \(M\) is called a Chebyshev set in \(H\) ([2], p. 35).
The existence and the uniqueness of best approximation elements are treated in Chapter 3 of [2]: every complete convex set in an inner product space is a Chebyshev set ([2], Th.3.4).
Two elements \(u, v \in H\) are called orthogonal if \((u, v) = 0\). The cosine of the angle between \(u, v \in H \setminus \{0\}\) is defined by the formula
\[ \cos \widehat{(u, v)} = \frac{(u, v)}{\|u\| \cdot \|v\|}. \]
Concerning the characterization of best approximation elements, the following result holds ([2], Th.4.9):
Let \(M\) be a subspace of \(H\), \(h \in H\) and \(m_0 \in M\). Then \(m_0 = P_M(h)\) if and only if
\[ (h - m_0, m) = 0 \]
for all \(m \in M\).
The geometric interpretation of this characterization result is that the element \(h - P_M(h)\) is orthogonal to each element of \(M\). This is the reason why \(P_M(h)\) is often called the orthogonal projection of \(h\) on \(M\).
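For completeness, we recall the standard computation behind the sufficiency of this condition: if \(h - m_0\) is orthogonal to \(M\), then for every \(m \in M\),
\[ \|h - m\|^2 = \|h - m_0\|^2 + \|m_0 - m\|^2 \geq \|h - m_0\|^2, \]
since \(m_0 - m \in M\). In particular, if \(\{u_1, \ldots, u_k\}\) is an orthonormal basis of a finite-dimensional subspace \(M\), then \(P_M(h) = \sum_{i=1}^{k} (h, u_i) u_i\), a formula that underlies the computations in the examples below.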
The following result appears in [2], p. 80 under the name "the Reduction Principle":
Let \(K\) be a convex subset of the inner product space \(H\) and let \(M\) be any Chebyshev subspace of \(H\) that contains \(K\). Then

a) \(P_K(P_M(h)) = P_K(h) = P_M(P_K(h))\), for every \(h \in H\);

b) \(d(h, K)^2 = d(h, M)^2 + d(P_M(h), K)^2\), for every \(h \in H\).
Obviously, if \(K\) is a closed and convex subset of a complete subspace \(M\) of the inner product space \(H\), then properties a) and b) are also fulfilled (see Th. 4.1 in [2] and Th. 2.2.6 in [3]).
2. Results
From now on, we consider the following particular case of the Reduction Principle:
Theorem 1. Let \(H\) be an inner product space, \(X\) a complete subspace of \(H\), and \(Y\) a closed subspace of \(X\). Then

a') \(P_Y(h) = P_Y(P_X(h)) = P_X(P_Y(h))\), for every \(h \in H\);

b') \(d(h, Y)^2 = d(h, X)^2 + d(P_X(h), Y)^2\), for every \(h \in H\).
The proof of Theorem 1 is an immediate consequence of the characterization result ([2], Th.4.9) and the Pythagorean Law (see e.g. [1], Th.1, p.70).
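For the reader's convenience, we indicate the computation giving b'): since \(h - P_X(h)\) is orthogonal to \(X\) and \(P_X(h) - P_Y(h) \in X\), the Pythagorean Law yields
\[ d(h, Y)^2 = \|h - P_Y(h)\|^2 = \|h - P_X(h)\|^2 + \|P_X(h) - P_Y(h)\|^2 = d(h, X)^2 + d(P_X(h), Y)^2, \]
where the last equality uses a'), i.e. \(P_Y(h) = P_Y(P_X(h))\).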
A generalization of Theorem 1 is:
Theorem 2. Let \(H\) be an inner product space and let \(M_1, M_2, \ldots, M_n\) (\(n \geq 2\)) be subspaces of \(H\) with the following properties:

\(M_1\) is complete;

\(M_i\), \(i = 2, 3, \ldots, n\), are closed;

\(M_1 \supset M_2 \supset \cdots \supset M_n\).
a) For every \(h \in H\) the following equalities hold:
\[ P_{M_k}(h) = P_{M_k} P_{M_{k-1}} \cdots P_{M_1}(h), \quad k = 2, 3, \ldots, n. \]
b) Let \(P_{M_1}(h) = m_1\) and \(P_{M_k} P_{M_{k-1}}(h) = m_k\), \(k = 2, 3, \ldots, n\). Then
\[ d(h, M_n)^2 = d(h, M_1)^2 + \sum_{k=2}^{n} d(m_{k-1}, M_k)^2. \]
Using the characterization result ([2], Th. 4.9), it follows that the element \(P_{M_n} P_{M_{n-1}} \cdots P_{M_1}(h)\) is the orthogonal projection of \(h\) on \(M_n\); indeed,
\[ h - P_{M_n} P_{M_{n-1}} \cdots P_{M_1}(h) = (h - m_1) + (m_1 - m_2) + \cdots + (m_{n-1} - m_n), \]
and each term of this sum is orthogonal to \(M_n\), since \(h - m_1\) is orthogonal to \(M_1 \supset M_n\) and \(m_{k-1} - m_k = m_{k-1} - P_{M_k}(m_{k-1})\) is orthogonal to \(M_k \supset M_n\).

On the other hand, \((h - P_{M_n}(h), y) = 0\) for every \(y \in M_n\). Consequently both \(P_{M_n}(h)\) and \(P_{M_n} P_{M_{n-1}} \cdots P_{M_1}(h)\) are orthogonal projections of \(h\) on \(M_n\), so they coincide, which proves a). Assertion b) then follows by applying Theorem 1 b') repeatedly along the chain \(M_1 \supset M_2 \supset \cdots \supset M_n\).
Remark. Obviously, Theorem 1 is also valid if \(H\) is a Hilbert space and \(X, Y\) are closed subspaces of \(H\) with \(Y \subset X\). Also, Theorem 2 is valid if \(H\) is a Hilbert space and \(M_1 \supset M_2 \supset \cdots \supset M_n\) are closed subspaces of \(H\).
A converse of the Reduction Principle is given in [3], Th.2.2.6:
Let \(H\) be an inner product space, \(X\) a complete subspace of \(H\) and \(K\) a closed and convex subset of \(X\). If \(x\) is the orthogonal projection of \(h \notin X\) on \(X\) and \(m\) is the metric projection of \(h\) on \(K\), then \(m\) is the metric projection of \(x\) on \(K\).
A first converse of Theorem 1 is:
Theorem 3. Let \(H\) be an inner product space, \(X\) a complete subspace of \(H\), and \(Y\) a closed subspace of \(X\). Let \(h \in H \setminus X\) and let \(P_X(h)\) and \(P_Y(h)\) be the orthogonal projections of \(h\) on \(X\) and on \(Y\), respectively. Then \(P_Y(h)\) is the orthogonal projection of \(P_X(h)\) on \(Y\).
Proof. Indeed, by hypothesis it follows that
\[ (h - P_X(h), x) = 0, \quad \forall x \in X, \]
\[ (h - P_Y(h), y) = 0, \quad \forall y \in Y, \]
so that for every \(y \in Y\) one has
\[ (P_X(h) - P_Y(h), y) = (P_X(h) - h, y) + (h - P_Y(h), y) = 0, \]
because \(y \in Y \subset X\).
It follows that \(P_Y(h)\) is the orthogonal projection of \(P_X(h)\) on \(Y\).
A second converse of Theorem 1 is:
Theorem 4. Let \(H\) be an inner product space, \(X\) a complete subspace of \(H\), and \(Y\) a closed subspace of \(X\) with \(\operatorname{codim}_X Y = 1\). Let \(x_0 \in X \setminus Y\) and let \(P_Y(x_0)\) be the orthogonal projection of \(x_0\) on \(Y\). If \(h \in H \setminus X\), \(P_Y(h) = P_Y(x_0)\) and \((h - x_0, x_0 - P_Y(x_0)) = 0\), then \(P_X(h) = x_0\).
Proof. If the equality \((h - x_0, x) = 0\) is fulfilled for every \(x \in X\), then \(P_X(h) = x_0\), i.e. the conclusion of the theorem; so it suffices to verify this equality.

Since \(P_Y(h) = P_Y(x_0)\), for every \(y \in Y\) one has \((h - x_0, y) = (h - P_Y(h), y) + (P_Y(x_0) - x_0, y) = 0\). It follows that \(h - x_0\) is orthogonal to \(Y\).
Because, by hypothesis, \((h - x_0, x_0 - P_Y(x_0)) = 0\), it follows that \((h - x_0, u) = 0\) for every \(u \in \operatorname{span}\{x_0 - P_Y(x_0)\}\). Because \(x_0 - P_Y(x_0)\) is orthogonal to \(Y\), \(Y\) is a closed subspace of the Hilbert space \(X\) and \(\operatorname{codim}_X Y = 1\), it follows that \(X = \operatorname{span}\{x_0 - P_Y(x_0)\} \oplus Y\), i.e. \(X\) is the direct sum of the subspaces \(\operatorname{span}\{x_0 - P_Y(x_0)\}\) and \(Y\) (see [2], Th. 5.9, p. 77 and [1], Th. 4, p. 65). Consequently \((h - x_0, x) = 0\) for every \(x \in X\).
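Explicitly, writing an arbitrary \(x \in X\) as \(x = \lambda(x_0 - P_Y(x_0)) + y\), with \(\lambda \in \mathbb{R}\) and \(y \in Y\), one obtains
\[ (h - x_0, x) = \lambda (h - x_0, x_0 - P_Y(x_0)) + (h - x_0, y) = 0, \]
both terms vanishing by the above.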
Remark. The condition \(\operatorname{codim}_X Y = 1\) in Theorem 4 is essential. Indeed, let \(\{e_1, e_2, e_3\}\) be the canonical orthonormal basis of the Hilbert space \(\mathbb{R}^3\), \(X = \operatorname{span}\{e_1, e_2\}\), \(Y = \{0\}\) and \(h = 3e_1 + e_2 + 5e_3\). Let \(x_0 = e_1 + 2e_2\). Then \(P_Y(x_0) = 0\), \(P_X(h) = 3e_1 + e_2\) and \(P_Y(h) = 0\). The conditions \(P_Y(x_0) = P_Y(h)\) and \((h - x_0, x_0 - P_Y(x_0)) = (2e_1 - e_2 + 5e_3, e_1 + 2e_2) = 0\) are fulfilled, but \(P_X(h) = 3e_1 + e_2 \neq x_0 = e_1 + 2e_2\). Observe that \(\operatorname{codim}_X Y = 2\).
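In detail, a quick check of the distances involved:
\[ \|h - P_X(h)\|^2 = \|5e_3\|^2 = 25 = d(h, X)^2, \qquad \|h - x_0\|^2 = 2^2 + (-1)^2 + 5^2 = 30, \]
so \(x_0\) is indeed not the element of best approximation of \(h\) in \(X\).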
Examples. 1° Let \(l_2 = l_2(\mathbb{N})\) be the space of all sequences \(x = (x(i))\) of real numbers such that \(\sum_{i=1}^{\infty} x^2(i) < \infty\). It is known that \(l_2\) is a Hilbert space with respect to the inner product \((x, y) = \sum_{i=1}^{\infty} x(i) y(i)\) and the norm \(\|x\| = \left(\sum_{i=1}^{\infty} x^2(i)\right)^{1/2}\). Let \(\{e_1, e_2, \ldots\}\) be the canonical basis of \(l_2\). The closed subspace \(X = \overline{\operatorname{span}}\{e_{2n-1} \mid n = 1, 2, 3, \ldots\}\) is a Chebyshev subspace of \(l_2\) and the orthogonal projection of \(h = (h(1), h(2), \ldots) \in l_2\) is \(P_X(h) = \sum_{i=1}^{\infty} h(2i-1) e_{2i-1}\), because \(h - P_X(h) = \sum_{j=1}^{\infty} h(2j) e_{2j}\) is orthogonal to \(X\).
Let \(Y = \operatorname{span}\{e_1, e_3 + e_5\}\). Then \(Y\) is a Chebyshev subspace of \(l_2\) (and of \(X\)) and, for every \(x \in l_2\), the element
\[ y_0 = (x, e_1) e_1 + \frac{(x, e_3) + (x, e_5)}{2}(e_3 + e_5) \]
satisfies \((x - y_0, e_1) = 0\) and \((x - y_0, e_3 + e_5) = 0\), hence \(x - y_0\) is orthogonal to \(Y\), so \(y_0 = P_Y(x)\).

2° Let \(l_2(4) = \operatorname{span}\{e_1, e_2, e_3, e_4\}\), where \(e_i(j) = \delta_{ij}\), \(i, j = 1, 2, 3, 4\) (see 1°), and let \(X = \operatorname{span}\{e_1, e_2, e_3\}\), \(Y = \operatorname{span}\{e_1, e_2\}\) and \(Z = \operatorname{span}\{e_1\}\).
If \(x_0 = 2e_1 + e_2 + 2e_3\), then \(P_Y(x_0) = 2e_1 + e_2\). For \(\alpha, \beta \in \mathbb{R}\) let \(h = 2e_1 + e_2 + \alpha e_3 + \beta e_4\). Then \(P_Y(h) = 2e_1 + e_2\) and \((h - x_0, x_0 - P_Y(x_0)) = 2(\alpha - 2) = 0\) implies \(\alpha = 2\).
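In detail, since \(h - x_0 = (\alpha - 2)e_3 + \beta e_4\) and \(x_0 - P_Y(x_0) = 2e_3\), the condition above amounts to
\[ (h - x_0, x_0 - P_Y(x_0)) = ((\alpha - 2)e_3 + \beta e_4, 2e_3) = 2(\alpha - 2) = 0. \]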
Every element \(h = 2e_1 + e_2 + 2e_3 + \beta e_4\), \(\beta \in \mathbb{R}\), has as orthogonal projection on \(X\) the element \(P_X(h) = 2e_1 + e_2 + 2e_3 = x_0\).
Observe that \(\operatorname{codim}_X Y = 1\).
Consider now the orthogonal projections on \(Z\) (\(\operatorname{codim}_X Z = 2\)). Then \(P_Z(x_0) = 2e_1\), \(P_Z(h) = 2e_1\) and \((h - x_0, x_0 - P_Z(x_0)) = \alpha + \beta - 3 = 0\) implies \(\alpha + \beta = 3\).
Choosing the element \(h = 2e_1 + 2e_2 + e_3 + 2e_4\), one obtains \(P_X(h) = 2e_1 + 2e_2 + e_3 \neq x_0\), so the conclusion of Theorem 4 fails for \(Z\).
3° Let \(L_2[-1, 1]\) be the Hilbert space of all (Lebesgue) measurable real-valued functions on \([-1, 1]\) with the property that \(\int_{-1}^{1} h^2(t)\, dt < \infty\). The inner product on \(L_2[-1, 1]\) is \((x, y) = \int_{-1}^{1} x(t) y(t)\, dt\) and the associated norm is \(\|h\| = \left(\int_{-1}^{1} h^2(t)\, dt\right)^{1/2}\). Consider also the (normalized) Legendre polynomials (see [2])
\[ p_n(t) = \sqrt{\frac{2n + 1}{2}} \cdot \frac{1}{2^n n!} \frac{d^n}{dt^n}\left[(t^2 - 1)^n\right], \]
for \(n \geq 0\).
The set \(\{p_0, p_1, \ldots, p_n\}\), \(n \geq 0\), is orthonormal in \(L_2[-1, 1]\). Consider the following subspaces of \(L_2[-1, 1]\):
\[ X = \operatorname{span}\{p_0, p_1, p_2, p_3\}, \quad Y = \operatorname{span}\{p_0, p_1, p_2\}, \quad Z = \operatorname{span}\{p_0, p_1\}. \]
Obviously, \(Z \subset Y \subset X \subset L_2[-1, 1]\) and \(P_Z(h) = P_Z P_Y P_X(h)\).
Let \(x_0 = p_0 + 2p_1 + 2p_2 + p_3\). If \(h \in L_2[-1, 1] \setminus X\), then \(P_Y(h) = P_Y(x_0)\) if and only if \((h, p_0) = 1\), \((h, p_1) = 2\) and \((h, p_2) = 2\). The condition \((x_0 - P_Y(x_0), h - x_0) = 0\) implies \((p_3, h - x_0) = 0\) and, consequently, \((p_3, h) = (p_3, x_0) = 1\). It follows that \(P_X(h) = x_0\). Observe that \(\operatorname{codim}_X Y = 1\).
Now \(P_Z(x_0) = p_0 + 2p_1\), and \(P_Z(h) = P_Z(x_0)\) implies \((h, p_0) = 1\), \((h, p_1) = 2\). The condition \((x_0 - P_Z(x_0), h - x_0) = 0\) implies
\[ 2(h, p_2) + (h, p_3) = 2(x_0, p_2) + (x_0, p_3) = 5. \]
Let \(h_1 = p_0 + 2p_1 + p_2 + 3p_3 + p_4\) and \(h_2 = p_0 + 2p_1 + \frac{1}{2} p_2 + 4p_3 + p_4\). Then \(P_Z(h_i) = P_Z(x_0)\), \(i = 1, 2\), and \((x_0 - P_Z(x_0), h_i - x_0) = 0\), \(i = 1, 2\), but \(P_X(h_1) \neq x_0\) and \(P_X(h_2) \neq x_0\). Observe that \(\operatorname{codim}_X Z = 2\).
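In detail, since \(x_0 - P_Z(x_0) = 2p_2 + p_3\), \(h_1 - x_0 = -p_2 + 2p_3 + p_4\) and \(h_2 - x_0 = -\frac{3}{2} p_2 + 3p_3 + p_4\), one checks directly that
\[ (x_0 - P_Z(x_0), h_1 - x_0) = -2 + 2 = 0, \qquad (x_0 - P_Z(x_0), h_2 - x_0) = -3 + 3 = 0, \]
while \(P_X(h_1) = p_0 + 2p_1 + p_2 + 3p_3\) and \(P_X(h_2) = p_0 + 2p_1 + \frac{1}{2} p_2 + 4p_3\), both different from \(x_0\).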
References
[1] Cheney, W., Analysis for Applied Mathematics, Springer-Verlag, New York-Berlin-Heidelberg, 2001.
[2] Deutsch, F., Best Approximation in Inner Product Spaces, Springer-Verlag, New York-Berlin-Heidelberg, 2001.
[3] Laurent, P.J., Approximation et Optimisation, Hermann, Paris, 1972.
“T. Popoviciu” Institute of Numerical Analysis, O.P. 1, C.P. 68, Cluj-Napoca, Romania
E-mail address: cmustata@ictp.acad.ro
Received by the editors: 21.03.2006.
2000 Mathematics Subject Classification. 41A65, 46C05.
Key words and phrases. Inner product spaces, the Reduction Principle, best approximation.