LEAST SQUARES PROBLEM FOR LINEAR INEQUALITIES WITH BOUNDS ON THE VARIABLES

In [2] S. P. Han proposed a method for finding a least-squares solution of systems of linear inequalities. Additional information about the real-world problem may take the form of additional inequalities; a common case is when the variables are restricted to lie in prescribed intervals. A method for this important case is the subject of this paper.


INTRODUCTION
Consider the system of linear inequalities

(1) Ax ≤ b,

where A ∈ M_{m×n}, b ∈ R^m, x ∈ R^n. When the system is inconsistent, we are interested in vectors that satisfy system (1) in a least-squares (LS) sense, that is, vectors solving

(2) min_{x ∈ R^n} ‖(Ax − b)+‖,

where ‖·‖ stands for the Euclidean norm in R^m and (Ax − b)+ is the m-vector whose i-th component is max{(Ax − b)_i, 0}. The linear least squares problem for systems of equalities has long enjoyed a great deal of attention from researchers (see [1]).
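As a small illustration of the objective in (2) (the function name and setup below are ours, not from the paper), the LS merit of a candidate x for system (1) can be evaluated as:

```python
import numpy as np

def ls_inequality_residual(A, b, x):
    """Euclidean norm of (Ax - b)+, the quantity minimized in (2)."""
    r = A @ x - b
    return np.linalg.norm(np.maximum(r, 0.0))
```

Any x satisfying Ax ≤ b gives the value 0; for an inconsistent system the minimum of this quantity is positive.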
We are concerned with the LS problem (2) for inequalities when the components of x are restricted to meaningful intervals:

(3) l_i ≤ x_i ≤ u_i, i = 1, ..., n.
A vector which satisfies the constraints (3) will be called a feasible point.
In [3] we worked out an algorithm for solving convex programs subject to box constraints. In this paper we apply that algorithm to solve problem (2) with constraints (3).
The proposed method proceeds by solving a finite number of smaller unconstrained subproblems, each having the form of a least-squares minimization over a linear subspace. It is an adaptive method for determining the constraints that are active at the optimal solution x̄, i.e., the components of x̄ that lie exactly at one of their bounds. The method operates on two levels. At each iteration of the higher level (called a major iteration), it is decided which variables are fixed at one of their bounds and which are free, that is, strictly between their bounds.
At the lower level, a least-squares subproblem is solved in the subspace of the free variables, while the fixed variables are kept unchanged. The iterations of this subproblem are called minor iterations. The starting point for the minor iterations is a feasible point supplied by the current major iteration.
Consider the following problem:

(4) min f(x) subject to (3),

where f is the convex differentiable function

f(x) = (1/2) ‖(Ax − b)+‖².

For any feasible point x, we use N_l(x) and N_u(x) to denote the sets of indices for which the corresponding components of x are fixed at the lower and upper bounds, respectively:

N_l(x) = {i : x_i = l_i}, N_u(x) = {i : x_i = u_i}.

If x̄ is an optimal point for problem (4), then it is also optimal for the "restraint" problem

min f(x) subject to l_i ≤ x_i ≤ u_i, i ∈ N_l(x̄) ∪ N_u(x̄).

If we can determine the sets N_l(x̄) and N_u(x̄), we can solve the restraint problem with the constraints taken as equalities:

(6) min f(x) subject to x_i = l_i, i ∈ N_l(x̄); x_i = u_i, i ∈ N_u(x̄).

Consider the subproblem

(7) min f(x) subject to x_i = l_i, i ∈ N_l^p; x_i = u_i, i ∈ N_u^p.

Assume that F_p is the complement of N_p = N_l^p ∪ N_u^p, that is, the set of indices of the free variables at major iteration p. We also assume, without loss of generality, that F_p = {1, ..., q}. Then we have the following partitions:

A = [C D], x = (z, z̄),

where the subvector z contains the first q components of x (the free part), while z̄ contains the last n − q components (the fixed part). The matrix C ∈ M_{m×q} contains the first q columns of A, and D contains the remaining n − q columns.
The subproblem (7) then reduces to finding a vector z^(p) ∈ R^q solving

(8) min_{z ∈ R^q} (1/2) ‖(Cz − d)+‖², where d = b − D z̄.

From a practical point of view, (8) is an unconstrained problem in the q free variables.
Then, returning to the n-dimensional space, the optimal point for (7) is y^p = (z^(p), z̄).
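The partition of A and x into free and fixed parts, together with the data d of subproblem (8), can be sketched as follows (the helper name and the index-list interface are ours):

```python
import numpy as np

def split_free_fixed(A, b, x, free_idx):
    """Partition A = [C D] and x = (z, z_bar) as in the text, and build
    d = b - D @ z_bar, so that C @ z - d equals A @ x - b."""
    n = A.shape[1]
    fixed_idx = [j for j in range(n) if j not in set(free_idx)]
    C = A[:, free_idx]          # columns of the free variables
    D = A[:, fixed_idx]         # columns of the fixed variables
    z = x[free_idx]             # free part
    z_bar = x[fixed_idx]        # fixed part
    d = b - D @ z_bar
    return C, d, z, z_bar
```

With this identity, the subproblem objective (1/2)‖(Cz − d)+‖² agrees with (1/2)‖(Ax − b)+‖² as long as the fixed part stays unchanged.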

SOLVING THE SUBPROBLEM IN THE FREE VARIABLES
For solving the least squares subproblem (8), we have chosen the S.-P. Han algorithm [2], with the modification that the iterations of the algorithm (the minor iterations) are stopped as soon as one or more of the free variables violate their bounds. The starting point for the minor iterations is given by the current major iteration. Han's method has a finite convergence property. The function

g(z) = (1/2) ‖(Cz − d)+‖²

is differentiable and convex, and its gradient is

∇g(z) = C^T (Cz − d)+.

The necessary and sufficient condition for a q-vector z to be a least squares solution of the system

(9) Cz ≤ d

is

(10) C^T (Cz − d)+ = 0.

Condition (10) may be called the normal equation of system (9), because it is analogous to the normal equation for the equality case.
In [2] it is shown that a least squares solution of (9) exists. System (9) may also be written as

c_i^T z ≤ d_i, i ∈ M = {1, ..., m},

where c_i^T is the i-th row of the matrix C and d_i is the i-th component of d. For z ∈ R^q we use I(z) to denote the set of indices i ∈ M of the active and violated inequalities at z, that is,

I(z) = {i ∈ M : c_i^T z ≥ d_i}.

For I = I(z), we denote by C_I the submatrix of C consisting of the rows whose indices are in I; the subvector d_I is defined similarly.
Algorithm 1 (least squares solution of linear inequalities).

Typical step: Generate the direction

w = C_I^+ (d_I − C_I z^(j)),

where I = I(z^(j)) and C_I^+ is the pseudo-inverse of C_I. Determine the stepsize λ as the smallest minimizer of the function φ(λ) = g(z^(j) + λw), and set z^(j+1) = z^(j) + λw. In [2] it is shown that w is a descent direction for the function g at z^(j); more precisely, ∇g(z^(j)) = −C_I^T C_I w. If w = 0, then by (10) z^(j) is a least squares solution of (9). Otherwise the process is repeated until ‖C^T (Cz^(j) − d)+‖ ≤ ε or one or more components of z^(j) violate their bounds.
The pseudo-inverse can be computed via a singular value decomposition. In any case, to compute w it is not necessary to form the pseudo-inverse explicitly.
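The typical step above can be sketched as follows. This is a simplified sketch: we take the unit stepsize λ = 1 instead of the exact smallest minimizer, we do form the pseudo-inverse explicitly for brevity, and we omit the bound check that the paper adds for the minor iterations; all names are ours.

```python
import numpy as np

def han_iterate(C, d, z0, eps=1e-10, max_iter=200):
    """Han-type iterations for min (1/2) ||(C z - d)+||^2 (a sketch)."""
    z = z0.astype(float).copy()
    for _ in range(max_iter):
        grad = C.T @ np.maximum(C @ z - d, 0.0)
        if np.linalg.norm(grad) <= eps:
            break                              # normal equation (10) holds
        I = np.flatnonzero(C @ z - d >= 0.0)   # active and violated set I(z)
        w = np.linalg.pinv(C[I]) @ (d[I] - C[I] @ z)
        if np.linalg.norm(w) <= eps:
            break                              # w = 0: LS solution by (10)
        z = z + w                              # stepsize 1 (simplification)
    return z
```

For a consistent system such as x_1 ≤ 1, x_2 ≤ 1, x_1 + x_2 ≤ 1, the iteration drives the residual (Cz − d)+ to zero.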

THE METHOD
Let y^p be optimal for problem (7). It follows that there are Lagrange multipliers λ_i, i ∈ N_p, such that

(11) ∇f(y^p) = Σ_{i ∈ N_p} λ_i e_i,

i.e., ∇f(y^p) is expressed as a unique linear combination of the unit vectors e_i = (0, ..., 1, ..., 0)^T (with the 1 in the i-th position).

If the conditions

(12) λ_i ≥ 0 for i ∈ N_l^p, (13) λ_i ≤ 0 for i ∈ N_u^p

are satisfied, then y^p is an optimal solution of problem (4) (see [3]). If L = min_{i ∈ N_p} α_i λ_i, with α_i = 1 for i ∈ N_l^p and α_i = −1 for i ∈ N_u^p, then conditions (12)-(13) may be written as L ≥ 0. From (6) and (11), the Lagrange multipliers are calculated as follows:

λ_i = (∇f(y^p))_i = (A^T (A y^p − b)+)_i, i ∈ N_p.

The method is briefly the following: suppose the starting feasible point x^1 is given; then we solve the subproblem (14) determined by the sets N_l(x^1) and N_u(x^1), and let y^1 be its optimal point. We distinguish between two possibilities.
The first case: y^1 is feasible. If L < 0, then a subspace of larger dimension is considered by releasing a constraint (according to Propositions 2 and 3 of [3], a fixed variable leaves its bound and becomes free), and a smaller value of f will be obtained.
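The optimality test of the first case can be sketched as follows (a sketch; the sign convention α_i = +1 on a lower bound and α_i = −1 on an upper bound follows our reading of (12)-(13), and the names are ours):

```python
import numpy as np

def multiplier_test(A, b, y, N_lo, N_up):
    """Compute L = min over fixed indices i of alpha_i * lambda_i, where
    lambda_i = (A^T (A y - b)+)_i is the i-th gradient component of f."""
    lam = A.T @ np.maximum(A @ y - b, 0.0)
    vals = {i: lam[i] for i in N_lo}           # alpha_i = +1 at lower bounds
    vals.update({i: -lam[i] for i in N_up})    # alpha_i = -1 at upper bounds
    if not vals:
        return 0.0, None                       # no fixed variables
    i0 = min(vals, key=vals.get)
    return vals[i0], i0    # L >= 0: y optimal; L < 0: release variable i0
```

If L < 0, the returned index i0 identifies a fixed variable whose release decreases f.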
The second case: y^1 is not feasible for problem (2)-(3) (there exists i ∉ N(x^1) such that y^1_i < l_i or y^1_i > u_i). Then let x^2 be the feasible point closest to y^1 on the line segment between x^1 and y^1:

x^2 = x^1 + τ(y^1 − x^1),

where the scalar τ ∈ (0, 1) is determined such that l_i ≤ x^2_i ≤ u_i, i = 1, ..., n, and x^2 is closest to y^1. At least one additional component r ∉ N(x^1) of x^2 hits one of its bounds. Then we solve the subproblem determined by the enlarged sets of fixed variables.

Algorithm 2 (suboptimization algorithm for LS problem with interval constraints).
1. Initialization: let x^1 be a given feasible point; set p = 1 and determine N_l^1 and N_u^1, the sets of indices of the components of x^1 that are exactly at one of their bounds.

2. Typical step: Solve the subproblem (8) in the free variables, keeping the fixed variables unchanged. Determine y^p = (z^(p), z̄), where z̄ contains the fixed part (15).

Case 1: y^p is a feasible point. Determine L = min_{i ∈ N_p} α_i λ_i, where the λ_i are the Lagrange multipliers associated with the bound constraints:

λ_i = (A^T (A y^p − b)+)_i, i ∈ N_p.

If L ≥ 0, STOP: y^p is optimal for problem (2)-(3).
If L < 0, let i_0 ∈ N_p be such that L = α_{i_0} λ_{i_0}. Define N^{p+1} = N^p \ {i_0}, the variable i_0 becoming free, and go to step 2 with p + 1 replacing p.

Case 2: y^p is not a feasible point. Determine x^{p+1}, the feasible point closest to y^p on the line segment between x^p and y^p. Define N^{p+1} by adjoining to N^p the indices of the components of x^{p+1} that have newly reached one of their bounds, and go to step 2 with p + 1 replacing p.
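Putting the pieces together, the two-level method can be sketched as below. This is a simplified sketch under assumptions of our own: the subproblem (8) is solved to optimality with unit-stepsize Han-type steps (minor iterations are not stopped at the first bound violation; an infeasible subproblem optimum is handled afterwards, as in Case 2), and all names and interfaces are ours.

```python
import numpy as np

def solve_box_ls(A, b, l, u, x0, eps=1e-10, max_major=50):
    """Sketch of the suboptimization method for min ||(Ax - b)+|| s.t. l <= x <= u."""
    n = A.shape[1]
    x = x0.astype(float)
    N_lo = {i for i in range(n) if abs(x[i] - l[i]) <= eps}
    N_up = {i for i in range(n) if abs(x[i] - u[i]) <= eps} - N_lo

    def han_solve(C, d, z):                    # minor iterations on (8)
        for _ in range(200):
            if np.linalg.norm(C.T @ np.maximum(C @ z - d, 0.0)) <= eps:
                break                          # normal equation (10) holds
            I = np.flatnonzero(C @ z - d >= 0.0)
            w = np.linalg.pinv(C[I]) @ (d[I] - C[I] @ z)
            if np.linalg.norm(w) <= eps:
                break
            z = z + w                          # unit stepsize (simplification)
        return z

    for _ in range(max_major):
        free = [i for i in range(n) if i not in N_lo and i not in N_up]
        fixed = sorted(N_lo | N_up)
        y = x.copy()
        if free:
            d = b - A[:, fixed] @ x[fixed]
            y[free] = han_solve(A[:, free], d, x[free])
        if all(l[i] - eps <= y[i] <= u[i] + eps for i in free):
            # Case 1: y feasible -- multiplier test on the fixed variables.
            lam = A.T @ np.maximum(A @ y - b, 0.0)
            vals = {i: lam[i] for i in N_lo}
            vals.update({i: -lam[i] for i in N_up})
            if not vals or min(vals.values()) >= -eps:
                return y                       # L >= 0: y is optimal
            i0 = min(vals, key=vals.get)       # release one fixed variable
            N_lo.discard(i0); N_up.discard(i0)
            x = y
        else:
            # Case 2: step back to the closest feasible point on [x, y],
            # then fix each additional component that hit a bound.
            tau = 1.0
            for i in free:
                if y[i] < l[i]:
                    tau = min(tau, (l[i] - x[i]) / (y[i] - x[i]))
                elif y[i] > u[i]:
                    tau = min(tau, (u[i] - x[i]) / (y[i] - x[i]))
            x = x + tau * (y - x)
            for i in free:
                if abs(x[i] - l[i]) <= eps:
                    N_lo.add(i)
                elif abs(x[i] - u[i]) <= eps:
                    N_up.add(i)
    return x
```

Starting from a feasible point with every variable at a bound, the loop alternates between releasing variables (Case 1) and re-fixing variables that hit their bounds (Case 2).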