FISHER’S INFORMATION MEASURES AND TRUNCATED NORMAL DISTRIBUTIONS

Abstract. The aim of this paper is to give some properties of the Fisher information measure when a random variable X follows a truncated probability distribution. A truncated probability distribution can be regarded as a conditional probability distribution, in the sense that if X has an unrestricted distribution with probability density function $f(x)$, then $f_{a\leftrightarrow b}(x)$ is the probability density function which governs the behavior of X, subject to the condition that X is known to lie in $[a, b]$.

Let X be a continuous random variable on the probability space $(\Omega, \mathcal{K}, P)$ having the probability density function $f(x; \theta)$, which depends on a real parameter $\theta \in D_\theta \subseteq \mathbb{R}$, $D_\theta$ being the parameter space. Then we are confronted not with one probability distribution, but with a family of distributions, which will be denoted by the symbol $\{f(x; \theta) : \theta \in D_\theta\}$. Any member of this family of probability density functions will be denoted by the symbol $f(x; \theta)$, $\theta \in D_\theta$.
Let $S_n(X) = (X_1, X_2, \ldots, X_n)$ denote a random sample of size n from a distribution whose probability density function is one member (but which member is not known) of the family $\{f(x; \theta) : \theta \in D_\theta\}$ of probability density functions. That is, our random sample arises from a distribution that has the probability density function $f(x; \theta)$, $\theta \in D_\theta$. Our problem is that of defining a statistic $\hat{\theta} = t(X_1, X_2, \ldots, X_n)$ so that, if $x_1, x_2, \ldots, x_n$ are the observed experimental values of $X_1, X_2, \ldots, X_n$, then the number $t(x_1, x_2, \ldots, x_n)$ will be referred to as an estimate of $\theta$ and is usually written as $\hat{\theta} = t(x_1, x_2, \ldots, x_n)$. To evaluate estimators we need some definitions.
Definition 2. Any statistic whose mathematical expectation is equal to a parameter $\theta$ is called an unbiased estimator of $\theta$. Otherwise, the statistic is said to be biased.

Definition 3. Any statistic that converges stochastically to a parameter $\theta$ is called a consistent estimator of $\theta$; that is, $\hat{\theta}_n = t(X_1, X_2, \ldots, X_n)$ is consistent if
$$\lim_{n \to \infty} P\left(|\hat{\theta}_n - \theta| < \varepsilon\right) = 1$$
for all $\varepsilon > 0$.
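As a quick numerical illustration of Definitions 2 and 3 (a minimal sketch in Python; the parameter values, sample sizes and seed are arbitrary choices, not taken from this paper), the sample mean of an i.i.d. normal sample is unbiased for the mean m and consistent, since it concentrates around m as n grows:

```python
# Empirical check: the sample mean is unbiased (its average over many
# replications is close to m) and consistent (P(|xbar - m| < eps) -> 1).
import numpy as np

rng = np.random.default_rng(0)
m, sigma, reps = 2.0, 1.5, 10_000

for n in (10, 100, 1_000):
    xbar = rng.normal(m, sigma, size=(reps, n)).mean(axis=1)
    print(f"n={n:>5}:  mean of xbar = {xbar.mean():.4f},  "
          f"P(|xbar - m| < 0.1) = {(np.abs(xbar - m) < 0.1).mean():.4f}")
```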
Definition 4. An estimator $\hat{\theta} = t(X_1, X_2, \ldots, X_n)$ of $\theta$ is said to be a minimum variance unbiased estimator of $\theta$ if it has the following two properties: (i) $E(\hat{\theta}) = \theta$, that is, $\hat{\theta}$ is unbiased; (ii) $\mathrm{Var}(\hat{\theta}) \le \mathrm{Var}(\tilde{\theta})$ for every other unbiased estimator $\tilde{\theta}$ of $\theta$.

In the following we suppose that the parameter $\theta$ is unknown and we estimate a specified function $g(\theta)$ of $\theta$ with the help of a statistic $\hat{\theta} = t(X_1, X_2, \ldots, X_n)$ based on a random sample of size n, $S_n(X) = (X_1, X_2, \ldots, X_n)$, where the $X_i$ are independent and identically distributed random variables with probability density function $f(x; \theta)$, $\theta \in D_\theta$. The joint probability density function of $X_1, X_2, \ldots, X_n$, regarded as a function of $\theta$, has the following form:
$$L(x_1, x_2, \ldots, x_n; \theta) = \prod_{i=1}^{n} f(x_i; \theta),$$
where $L(x_1, x_2, \ldots, x_n)$ is called the likelihood function of the random sample $S_n(X) = (X_1, X_2, \ldots, X_n)$.
A well-known means of measuring the quality of a statistic is the Cramér-Rao inequality, which states that, under certain regularity conditions on $f(x; \theta)$ (more precisely, it requires the possibility of differentiating under the integral sign), any unbiased estimator $\hat{\theta}$ of $g(\theta)$ has variance which satisfies the following inequality:
$$\mathrm{Var}(\hat{\theta}) \ge \frac{[g'(\theta)]^2}{n\, I_X(\theta)}, \qquad (1)$$
where
$$I_X(\theta) = E\left[\left(\frac{\partial \log f(X; \theta)}{\partial \theta}\right)^2\right]$$
and n is the sample size. The quantity $I_X(\theta)$ is known as Fisher's information measure, and it measures the information about $g(\theta)$ which is contained in an observation of X. Also, the quantity $I_{S_n(X)}(\theta) = n\, I_X(\theta)$ is the information contained in the whole sample, since $X_1, X_2, \ldots, X_n$ are independent and identically distributed random variables with density $f(x; \theta)$, $\theta \in D_\theta$. An unbiased estimator of $g(\theta)$ that achieves the minimum in (1) is known as an efficient estimator.
Moreover, the right-hand side of the Cramér-Rao inequality is, for each $\theta \in D_\theta$, a lower bound for the variance of any unbiased estimator; the unbiased estimators for which this bound is attained are said to be efficient.
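The bound (1) can be checked numerically. The following sketch (assuming Python with NumPy; the chosen values of m, sigma and n are illustrative) verifies that for $X : N(m, \sigma^2)$ with $\sigma$ known, the variance of the sample mean equals the Cramér-Rao bound $\sigma^2/n = 1/(n\, I_X(m))$, so the sample mean is efficient for estimating $g(m) = m$:

```python
# Monte Carlo check of the Cramer-Rao bound for estimating m in N(m, sigma^2)
# with sigma known: here I_X(m) = 1/sigma^2, so Var(xbar) = sigma^2/n attains
# the bound 1/(n * I_X(m)).
import numpy as np

rng = np.random.default_rng(0)
m, sigma, n, reps = 2.0, 1.5, 50, 200_000

xbar = rng.normal(m, sigma, size=(reps, n)).mean(axis=1)
print(f"empirical Var(xbar) = {xbar.var():.6f}")
print(f"Cramer-Rao bound    = {sigma**2 / n:.6f}")
```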

THE TRUNCATED NORMAL DISTRIBUTION
Let X have a normal distribution with probability density function
$$f(x; m, \sigma^2) = \frac{1}{\sigma\sqrt{2\pi}}\, \exp\left(-\frac{(x-m)^2}{2\sigma^2}\right), \quad x \in \mathbb{R}, \qquad (2)$$
where the parameters m and $\sigma$ must satisfy the conditions $m \in \mathbb{R}$, $\sigma > 0$.
The mean and the variance of X are $E(X) = m$ and $\mathrm{Var}(X) = \sigma^2$, which shows that the parameters m and $\sigma$ have their usual significance as the mean and the standard deviation of the distribution. If the random variable X obeys the normal probability law with mean m and standard deviation $\sigma$, then we shall use the symbol $X : N(m, \sigma^2)$.
Definition 5. We say that the random variable X has a normal distribution truncated to the left at $X = a$ and to the right at $X = b$ if its probability density function $f_{a\leftrightarrow b}$ is of the form
$$f_{a\leftrightarrow b}(x; m, \sigma^2) = \begin{cases} k(a,b)\, f(x; m, \sigma^2), & x \in [a, b], \\ 0, & \text{otherwise}, \end{cases} \qquad (3)$$
where the constant $k(a,b)$ is determined from the condition
$$\int_a^b f_{a\leftrightarrow b}(x; m, \sigma^2)\, dx = 1. \qquad (4)$$
If we use (4) we obtain the value
$$k(a,b) = \frac{1}{\Phi\left(\frac{b-m}{\sigma}\right) - \Phi\left(\frac{a-m}{\sigma}\right)},$$
where
$$\Phi(z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{z} e^{-t^2/2}\, dt$$
is the standard normal distribution function corresponding to the standard normal random variable $Z = \frac{X-m}{\sigma}$. The probability density function of the random variable Z has the form
$$\varphi(z) = \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}, \quad z \in \mathbb{R}.$$
Moreover, we have the relation
$$f(x; m, \sigma^2) = \frac{1}{\sigma}\, \varphi\left(\frac{x-m}{\sigma}\right),$$
as well as the relation
$$P(a \le X \le b) = \Phi\left(\frac{b-m}{\sigma}\right) - \Phi\left(\frac{a-m}{\sigma}\right) \qquad (5)$$
for any real numbers a and b (finite or infinite, with $a < b$).
Finally, the following properties of the function $\Phi(z)$:
$$\Phi(-z) = 1 - \Phi(z), \qquad \Phi(-\infty) = 0, \qquad \Phi(+\infty) = 1, \qquad \Phi(0) = \tfrac{1}{2},$$
play a vital role in our subsequent work.
From the defining relation (3), when we take into account the relation (5), we obtain the following truncated normal probability density function:
$$f_{a\leftrightarrow b}(x; m, \sigma^2) = \begin{cases} \dfrac{f(x; m, \sigma^2)}{A}, & x \in [a, b], \\ 0, & \text{otherwise}, \end{cases} \qquad (6)$$
where
$$A = \Phi\left(\frac{b-m}{\sigma}\right) - \Phi\left(\frac{a-m}{\sigma}\right). \qquad (8)$$
Remark 1. A truncated probability distribution can be regarded as a conditional probability distribution, in the sense that if X has an unrestricted distribution with probability density function $f(x)$, then $f_{a\leftrightarrow b}(x)$, as defined above, is the probability density function which governs the behavior of X subject to the condition that X is known to lie in $[a, b]$.
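As a sanity check on (6) and (8), the following sketch (assuming Python with SciPy; the values of m, sigma, a, b are illustrative choices) verifies numerically that $f_{a\leftrightarrow b}$ integrates to one over $[a, b]$:

```python
# Numerical check that f_{a<->b}(x) = f(x; m, sigma^2) / A integrates to 1,
# where A = Phi((b - m)/sigma) - Phi((a - m)/sigma) as in (8).
from scipy.stats import norm
from scipy.integrate import quad

m, sigma, a, b = 1.0, 2.0, -1.0, 4.0
A = norm.cdf((b - m) / sigma) - norm.cdf((a - m) / sigma)

f_trunc = lambda x: norm.pdf(x, loc=m, scale=sigma) / A
total, _ = quad(f_trunc, a, b)
print(f"integral of f_trunc over [a, b] = {total:.10f}")   # ~ 1.0
```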
Remark 2. It is easy to see from (6) that
$$f_{\rightarrow b}(x; m, \sigma^2) = \lim_{a \to -\infty} f_{a\leftrightarrow b}(x; m, \sigma^2), \qquad f_{a\leftarrow}(x; m, \sigma^2) = \lim_{b \to +\infty} f_{a\leftrightarrow b}(x; m, \sigma^2),$$
and
$$f(x; m, \sigma^2) = \lim_{\substack{a \to -\infty \\ b \to +\infty}} f_{a\leftrightarrow b}(x; m, \sigma^2),$$
where $f_{\rightarrow b}(x; m, \sigma^2)$ is the probability density function when X has a normal distribution truncated to the right at $X = b$, $f_{a\leftarrow}(x; m, \sigma^2)$ is the probability density function when X has a normal distribution truncated to the left at $X = a$, and $f(x; m, \sigma^2)$ is the probability density function when X has an ordinary normal distribution.
Theorem 6. Let $X_{a\leftrightarrow b}$ be a random variable with a normal distribution truncated to the left at $X = a$ and to the right at $X = b$. Then
$$E(X_{a\leftrightarrow b}) = m + \sigma^2\, \frac{f(a; m, \sigma^2) - f(b; m, \sigma^2)}{A}, \qquad (9)$$
where $m = E(X)$, $\sigma^2 = \mathrm{Var}(X)$, X is the corresponding ordinary normal random variable, and A has the form (8).
Proof. Making use of the definition of $E(X_{a\leftrightarrow b})$ we obtain
$$E(X_{a\leftrightarrow b}) = \int_a^b x\, \frac{f(x; m, \sigma^2)}{A}\, dx. \qquad (10)$$
By making the change of variable
$$z = \frac{x - m}{\sigma}, \qquad \alpha = \frac{a - m}{\sigma}, \qquad \beta = \frac{b - m}{\sigma}, \qquad (11)$$
(10) can be rewritten as follows:
$$E(X_{a\leftrightarrow b}) = \frac{1}{A}\left(m\, I_1 + \sigma\, I_2\right),$$
where, as is easily seen,
$$I_1 = \int_\alpha^\beta \varphi(z)\, dz = \Phi(\beta) - \Phi(\alpha) = A \qquad (12)$$
and
$$I_2 = \int_\alpha^\beta z\, \varphi(z)\, dz = \varphi(\alpha) - \varphi(\beta) = \sigma\left[f(a; m, \sigma^2) - f(b; m, \sigma^2)\right]. \qquad (13)$$
From these last relations we obtain precisely the form (9) for the expected value of $X_{a\leftrightarrow b}$.
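Formula (9) can be cross-checked against scipy.stats.truncnorm, which parameterizes the truncation bounds in standardized units (a sketch with illustrative parameter values):

```python
# Check of (9): E(X_{a<->b}) = m + sigma^2 (f(a) - f(b)) / A, compared with
# scipy.stats.truncnorm.mean, which expects standardized bounds alpha, beta.
from scipy.stats import norm, truncnorm

m, sigma, a, b = 1.0, 2.0, -1.0, 4.0
alpha, beta = (a - m) / sigma, (b - m) / sigma
A = norm.cdf(beta) - norm.cdf(alpha)

mean_formula = m + sigma**2 * (norm.pdf(a, m, sigma) - norm.pdf(b, m, sigma)) / A
mean_scipy = truncnorm.mean(alpha, beta, loc=m, scale=sigma)
print(mean_formula, mean_scipy)   # the two values should agree
```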
Corollary 7. For the random variables $X_{a\leftarrow}$, $X_{\rightarrow b}$ and X we have
$$E(X_{a\leftarrow}) = m + \sigma^2\, \frac{f(a; m, \sigma^2)}{1 - \Phi\left(\frac{a-m}{\sigma}\right)}, \qquad E(X_{\rightarrow b}) = m - \sigma^2\, \frac{f(b; m, \sigma^2)}{\Phi\left(\frac{b-m}{\sigma}\right)}, \qquad E(X) = m.$$

Theorem 8. Let $X_{a\leftrightarrow b}$ be a random variable with a normal distribution truncated to the left at $X = a$ and to the right at $X = b$. Then
$$E(X^2_{a\leftrightarrow b}) = m^2 + \sigma^2 + \sigma^2\, \frac{(a+m)\, f(a; m, \sigma^2) - (b+m)\, f(b; m, \sigma^2)}{A}, \qquad (14)$$
where A is the real number given in (8).
Proof. Making use of the definition of $E(X^2_{a\leftrightarrow b})$ we have
$$E(X^2_{a\leftrightarrow b}) = \int_a^b x^2\, \frac{f(x; m, \sigma^2)}{A}\, dx. \qquad (15)$$
By making the change of variable (11), this last relation can be rewritten as follows:
$$E(X^2_{a\leftrightarrow b}) = \frac{1}{A}\left(m^2\, I_1 + 2m\sigma\, I_2 + \sigma^2\, I_3\right),$$
where $I_1$ and $I_2$ are precisely the integrals (12) and (13), and for the integral $I_3$ we obtain, using the formula for integration by parts (with $\varphi'(z) = -z\,\varphi(z)$),
$$I_3 = \int_\alpha^\beta z^2\, \varphi(z)\, dz = \alpha\,\varphi(\alpha) - \beta\,\varphi(\beta) + \Phi(\beta) - \Phi(\alpha) = \alpha\,\varphi(\alpha) - \beta\,\varphi(\beta) + A.$$
From (15), when we take into account these three values of the integrals $I_1$, $I_2$ and $I_3$, one obtains precisely the form (14) for the expected value $E(X^2_{a\leftrightarrow b})$.
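Likewise, (14) agrees with the second raw moment reported by scipy.stats.truncnorm (a sketch with the same illustrative parameters as above):

```python
# Check of (14): E(X^2_{a<->b}) = m^2 + sigma^2
#   + sigma^2 [(a + m) f(a) - (b + m) f(b)] / A.
from scipy.stats import norm, truncnorm

m, sigma, a, b = 1.0, 2.0, -1.0, 4.0
alpha, beta = (a - m) / sigma, (b - m) / sigma
A = norm.cdf(beta) - norm.cdf(alpha)
fa, fb = norm.pdf(a, m, sigma), norm.pdf(b, m, sigma)

m2_formula = m**2 + sigma**2 + sigma**2 * ((a + m) * fa - (b + m) * fb) / A
m2_scipy = truncnorm.moment(2, alpha, beta, loc=m, scale=sigma)
print(m2_formula, m2_scipy)   # the two values should agree
```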
Corollary 9. If $X_{a\leftrightarrow b}$ is a random variable with a normal distribution truncated to the left at $X = a$ and to the right at $X = b$, then
$$\mathrm{Var}(X_{a\leftrightarrow b}) = \sigma^2 + \sigma^2\, \frac{(a-m)\, f(a; m, \sigma^2) - (b-m)\, f(b; m, \sigma^2)}{A} - \sigma^4 \left[\frac{f(a; m, \sigma^2) - f(b; m, \sigma^2)}{A}\right]^2,$$
which follows from $\mathrm{Var}(X_{a\leftrightarrow b}) = E(X^2_{a\leftrightarrow b}) - [E(X_{a\leftrightarrow b})]^2$, if we take into account the forms (9) and (14) of the moments $E(X_{a\leftrightarrow b})$ and $E(X^2_{a\leftrightarrow b})$.
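The variance formula of Corollary 9 can be verified in the same way (a sketch with illustrative parameters):

```python
# Check of Corollary 9 against scipy.stats.truncnorm.var.
from scipy.stats import norm, truncnorm

m, sigma, a, b = 1.0, 2.0, -1.0, 4.0
alpha, beta = (a - m) / sigma, (b - m) / sigma
A = norm.cdf(beta) - norm.cdf(alpha)
fa, fb = norm.pdf(a, m, sigma), norm.pdf(b, m, sigma)

var_formula = (sigma**2
               + sigma**2 * ((a - m) * fa - (b - m) * fb) / A
               - sigma**4 * ((fa - fb) / A) ** 2)
var_scipy = truncnorm.var(alpha, beta, loc=m, scale=sigma)
print(var_formula, var_scipy)   # the two values should agree
```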
Corollary 10. For the random variables $X_{a\leftarrow}$, $X_{\rightarrow b}$ and X we have
$$\mathrm{Var}(X_{a\leftarrow}) = \sigma^2 + \sigma^2\, \frac{(a-m)\, f(a; m, \sigma^2)}{1 - \Phi\left(\frac{a-m}{\sigma}\right)} - \sigma^4 \left[\frac{f(a; m, \sigma^2)}{1 - \Phi\left(\frac{a-m}{\sigma}\right)}\right]^2,$$
$$\mathrm{Var}(X_{\rightarrow b}) = \sigma^2 - \sigma^2\, \frac{(b-m)\, f(b; m, \sigma^2)}{\Phi\left(\frac{b-m}{\sigma}\right)} - \sigma^4 \left[\frac{f(b; m, \sigma^2)}{\Phi\left(\frac{b-m}{\sigma}\right)}\right]^2,$$
and $\mathrm{Var}(X) = \sigma^2$.

FISHER'S INFORMATION MEASURES FOR THE TRUNCATED NORMAL DISTRIBUTIONS
Let X be a continuous random variable which has an ordinary normal distribution with probability density function (2), that is, $X : N(m, \sigma^2)$.

Theorem 11. If the random variable $X_{a\leftrightarrow b}$ has a normal distribution truncated to the left at $X = a$ and to the right at $X = b$, that is, its probability density function is of the form
$$f_{a\leftrightarrow b}(x; m, \sigma^2) = \begin{cases} \dfrac{f(x; m, \sigma^2)}{A}, & x \in [a, b], \\ 0, & \text{otherwise}, \end{cases} \qquad (16)$$
then the Fisher information measure corresponding to $X_{a\leftrightarrow b}$ has the following form:
$$I_{X_{a\leftrightarrow b}}(m) = \frac{1}{\sigma^2} + \frac{(a-m)\, f(a; m, \sigma^2) - (b-m)\, f(b; m, \sigma^2)}{\sigma^2 A} - \left[\frac{f(a; m, \sigma^2) - f(b; m, \sigma^2)}{A}\right]^2.$$

Proof. Let $X_{a\leftrightarrow b}$ be a continuous random variable with probability density function of the form (16), where $m \in D_m = \mathbb{R}$ is an unknown real parameter and $\sigma^2 \in \mathbb{R}_+$ is a known parameter.
For such a continuous random variable $X_{a\leftrightarrow b}$ the Fisher information measure has the form
$$I_{X_{a\leftrightarrow b}}(m) = E\left[\left(\frac{\partial \log f_{a\leftrightarrow b}(X_{a\leftrightarrow b}; m, \sigma^2)}{\partial m}\right)^2\right]. \qquad (17)$$
Using (17) and (16), we obtain
$$\frac{\partial \log f_{a\leftrightarrow b}(x; m, \sigma^2)}{\partial m} = \frac{x - m}{\sigma^2} - \frac{f(a; m, \sigma^2) - f(b; m, \sigma^2)}{A}, \qquad (18)$$
since $\frac{\partial A}{\partial m} = f(a; m, \sigma^2) - f(b; m, \sigma^2)$. The score (18) has mean zero by (9), so it is easy to see from (18) that for the quantity $I_{X_{a\leftrightarrow b}}(m)$ we have the following form:
$$I_{X_{a\leftrightarrow b}}(m) = \frac{\mathrm{Var}(X_{a\leftrightarrow b})}{\sigma^4} = \frac{1}{\sigma^2} + \frac{(a-m)\, f(a; m, \sigma^2) - (b-m)\, f(b; m, \sigma^2)}{\sigma^2 A} - \left[\frac{f(a; m, \sigma^2) - f(b; m, \sigma^2)}{A}\right]^2,$$
where we have used the form of $\mathrm{Var}(X_{a\leftrightarrow b})$ given in Corollary 9. This completes the proof of the theorem.
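Since the score (18) has mean zero, the information equals the variance of the score, which suggests a direct Monte Carlo check of Theorem 11 (a sketch assuming SciPy; the parameters are illustrative):

```python
# Check of Theorem 11: compare the closed form of I(m) with the Monte Carlo
# variance of the score (X - m)/sigma^2 - (f(a) - f(b))/A.
import numpy as np
from scipy.stats import norm, truncnorm

m, sigma, a, b = 1.0, 2.0, -1.0, 4.0
alpha, beta = (a - m) / sigma, (b - m) / sigma
A = norm.cdf(beta) - norm.cdf(alpha)
fa, fb = norm.pdf(a, m, sigma), norm.pdf(b, m, sigma)

info_formula = (1 / sigma**2
                + ((a - m) * fa - (b - m) * fb) / (sigma**2 * A)
                - ((fa - fb) / A) ** 2)

x = truncnorm.rvs(alpha, beta, loc=m, scale=sigma, size=1_000_000,
                  random_state=np.random.default_rng(0))
score = (x - m) / sigma**2 - (fa - fb) / A
print(info_formula, score.var())   # the two values should agree
```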
Corollary 12. For the random variables $X_{a\leftarrow}$, $X_{\rightarrow b}$ and X we have
$$I_{X_{a\leftarrow}}(m) = \frac{1}{\sigma^2} + \frac{(a-m)\, f(a; m, \sigma^2)}{\sigma^2\left[1 - \Phi\left(\frac{a-m}{\sigma}\right)\right]} - \left[\frac{f(a; m, \sigma^2)}{1 - \Phi\left(\frac{a-m}{\sigma}\right)}\right]^2,$$
$$I_{X_{\rightarrow b}}(m) = \frac{1}{\sigma^2} - \frac{(b-m)\, f(b; m, \sigma^2)}{\sigma^2\, \Phi\left(\frac{b-m}{\sigma}\right)} - \left[\frac{f(b; m, \sigma^2)}{\Phi\left(\frac{b-m}{\sigma}\right)}\right]^2,$$
and $I_X(m) = \frac{1}{\sigma^2}$.
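The one-sided cases follow from Theorem 11 by letting $b \to +\infty$ or $a \to -\infty$; for instance, the left-truncated information can be checked numerically as follows (a sketch with illustrative parameters):

```python
# Check of Corollary 12 for X_{a<-}: let b -> +inf, so Phi(beta) -> 1 and
# f(b) -> 0, and compare with the Monte Carlo variance of the score.
import numpy as np
from scipy.stats import norm, truncnorm

m, sigma, a = 1.0, 2.0, -1.0
alpha = (a - m) / sigma
A = 1.0 - norm.cdf(alpha)
fa = norm.pdf(a, m, sigma)

info_formula = 1 / sigma**2 + (a - m) * fa / (sigma**2 * A) - (fa / A) ** 2

x = truncnorm.rvs(alpha, np.inf, loc=m, scale=sigma, size=1_000_000,
                  random_state=np.random.default_rng(0))
score = (x - m) / sigma**2 - fa / A
print(info_formula, score.var())   # the two values should agree
```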