SOME SUFFICIENT CONDITIONS FOR THE CONVERGENCE OF THE CASCADE ALGORITHM AND FOR THE CONTINUITY OF THE SCALING FUNCTION

Abstract. We define a class of matrices which includes, under some natural assumptions, the matrices $m(0)$, $m(1)$ and $T_{2N-1}$, which are the key matrices of wavelet theory. The matrices of this class have the property that the eigenvalues of a product matrix are products of their eigenvalues. This property is used to establish some sufficient conditions for the convergence of the cascade algorithm and some sufficient conditions for the continuity of the scaling function. We generalize here the particular results obtained in a previous paper of ours.


INTRODUCTION
The dilation equation plays an important role in wavelet theory. A dilation equation is a functional equation having the form
$$\phi(t) = \sum_{k=0}^{N} h_k \phi(2t-k). \tag{1}$$
Any nonzero solution $\phi$ of such an equation is called a scaling function. The scaling functions lead to wavelets: if $\phi$ is a scaling function, then the associated "mother wavelet" is defined as
$$w(t) = \sum_{k} (-1)^k h_{1-k}\, \phi(2t-k).$$
The dilation equation is linear, so any multiple of a solution is a solution. It is convenient to normalize so that
$$\int_{\mathbb{R}} \phi(t)\, dt = 1.$$

This relation implies
$$\sum_{k} h_k = 2,$$
so in this paper we will suppose this condition satisfied. Also (see [3]), if this condition is satisfied, then the dilation equation (1) has a unique compactly supported solution $\phi(t)$. This solution may be a distribution.
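The simplest case is explicit: for the Haar coefficients $h_0 = h_1 = 1$ (so that $\sum_k h_k = 2$), the box function $\phi(t) = \chi_{[0,1)}(t)$ solves the dilation equation $\phi(t) = \phi(2t) + \phi(2t-1)$. A minimal numerical check of this classical example (not taken from the paper itself):

```python
import numpy as np

# Haar case: h_0 = h_1 = 1 (note sum h_k = 2).  The box function
# phi(t) = 1 on [0,1), 0 elsewhere, solves
#   phi(t) = h_0 * phi(2t) + h_1 * phi(2t - 1).

def box(t):
    # indicator function of [0, 1), vectorized
    return ((0.0 <= t) & (t < 1.0)).astype(float)

t = np.linspace(-1.0, 2.0, 601)
lhs = box(t)
rhs = 1.0 * box(2 * t) + 1.0 * box(2 * t - 1)
assert np.array_equal(lhs, rhs)
```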
For the coefficients $h_k$ some other conditions are required. In the wavelets literature these are the so-called $A_p$ conditions.

Definition 1. Let $p \in \mathbb{N}^*$. We say that the coefficients $h_k$ of the dilation equation (1) satisfy condition $A_p$ if
$$\sum_{k} (-1)^k k^m h_k = 0, \quad m = 0, \ldots, p-1,$$
with the convention $0^0 = 1$.
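As a concrete illustration (the specific coefficients are a standard example, not taken from this paper), the Daubechies D4 coefficients, normalized here so that $\sum_k h_k = 2$, satisfy $A_2$ but not $A_3$; this can be verified numerically:

```python
import numpy as np

# Checking the A_p sum rules sum_k (-1)^k k^m h_k = 0, m = 0..p-1
# (with the convention 0^0 = 1) for the Daubechies D4 coefficients,
# normalized so that sum h_k = 2.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 4.0

def A_condition(h, p):
    k = np.arange(len(h), dtype=float)
    return all(abs(np.sum((-1.0) ** k * k ** m * h)) < 1e-12
               for m in range(p))

assert abs(h.sum() - 2.0) < 1e-12   # normalization
assert A_condition(h, 2)            # D4 satisfies A_2
assert not A_condition(h, 3)        # but not A_3
```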
A way to solve the dilation equation is the cascade algorithm, described by
$$\phi_{i+1}(t) = \sum_{k} h_k \phi_i(2t-k), \quad i \geq 0,$$
with $\phi_0(t)$ usually taken as the box function
$$\phi_0(t) = \begin{cases} 1, & t \in [0,1), \\ 0, & \text{otherwise}. \end{cases}$$
The scaling function $\phi$ is the limit
$$\phi(t) = \lim_{i \to \infty} \phi_i(t).$$
In Section 2 we study some classes of matrices that will help us in studying the convergence of the cascade algorithm (Section 3) and then the continuity of the scaling function (Section 4).
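A numerical sketch of the cascade algorithm, again using the Daubechies D4 coefficients purely as an illustration. On a dyadic grid of step $2^{-10}$ the points $2t - k$ fall back on the grid, so each iterate is sampled exactly; since $\sum_k h_k = 2$, each cascade step preserves both the integral and the partition-of-unity identity $\sum_n \phi_i(t+n) = 1$ (a consequence of $A_1$):

```python
import numpy as np

# Cascade algorithm phi_{i+1}(t) = sum_k h_k phi_i(2t - k), started from
# the box function, for the Daubechies D4 coefficients (support [0, 3],
# sum h_k = 2).  On the dyadic grid below, 2t - k is again a grid point,
# so np.interp returns exact sample values.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 4.0

t = np.linspace(0.0, 3.0, 3 * 1024 + 1)          # step 2**-10
dt = t[1] - t[0]
phi = np.where((0.0 <= t) & (t < 1.0), 1.0, 0.0)  # box function phi_0

for _ in range(15):
    phi = sum(hk * np.interp(2 * t - k, t, phi, left=0.0, right=0.0)
              for k, hk in enumerate(h))

# sum h_k = 2 preserves the integral of the iterates,
integral = phi.sum() * dt
assert abs(integral - 1.0) < 0.05
# and A_1 preserves the partition of unity: phi(0.5)+phi(1.5)+phi(2.5) = 1.
s = phi[512] + phi[512 + 1024] + phi[512 + 2048]
assert abs(s - 1.0) < 1e-8
```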

A CLASS OF MATRICES
In this section we present the classes $C_N$ of matrices of $\mathcal{M}_N(\mathbb{R})$, $N \in \mathbb{N}^*$, which are closed with respect to multiplication and have the property that the eigenvalues of a product matrix are products of their eigenvalues. These classes include the matrices $T_{2N-1}$, $m(0)$ and $m(1)$ given in (15), (3) and (4), which are the key matrices in studying the dilation equation.
For a given $N \in \mathbb{N}^*$ we consider the matrix $S = (s_{ij})_{1 \le i,j \le N}$; a simple calculation gives its inverse $S^{-1}$. Then we define the class $C_N$ as the set of matrices $M \in \mathcal{M}_N(\mathbb{R})$ with the property that the matrix $SMS^{-1}$ is lower triangular. The set $C_N$ has the following properties:
1. $C_N$ is closed with respect to multiplication;
2. the eigenvalues of the product matrix $M_1 M_2$ of two matrices $M_1, M_2 \in C_N$ are products of the eigenvalues of $M_1$ and $M_2$.
Proof. It is immediate that the matrices $M$ and $SMS^{-1}$ have the same eigenvalues. We denote by $\lambda^i_k$, $k = 1, \ldots, N$, the eigenvalues of $M_i$, ordered as they appear on the diagonal of $SM_iS^{-1}$. Then $SM_1M_2S^{-1} = (SM_1S^{-1})(SM_2S^{-1})$ is again lower triangular, with diagonal entries $\lambda^1_k \lambda^2_k$, whence both conclusions 1 and 2 follow.
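The defining mechanism can be checked numerically: any matrices of the form $M_i = S^{-1} L_i S$ with $L_i$ lower triangular belong to such a class, and the eigenvalues of $M_1 M_2$ are the products of the diagonal entries of $L_1$ and $L_2$. The matrix $S$ below is an arbitrary invertible matrix chosen for illustration; the paper works with a specific $S$:

```python
import numpy as np

# If S M S^{-1} is lower triangular then the eigenvalues of M are the
# diagonal entries of S M S^{-1}; for two such matrices the eigenvalues
# of M1 M2 are products of eigenvalues of M1 and M2.
rng = np.random.default_rng(0)
N = 4
S = rng.normal(size=(N, N)) + 5 * np.eye(N)  # generically invertible
Sinv = np.linalg.inv(S)

L1 = np.tril(rng.normal(size=(N, N)))
L2 = np.tril(rng.normal(size=(N, N)))
M1 = Sinv @ L1 @ S   # members of the class C_N by construction
M2 = Sinv @ L2 @ S

eig_prod = np.sort(np.linalg.eigvals(M1 @ M2).real)
diag_prod = np.sort(np.diag(L1) * np.diag(L2))
assert np.allclose(eig_prod, diag_prod)
```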

THE CONTINUITY OF THE SCALING FUNCTION
As shown in [3, Ch. 7], the study of the continuity of the scaling function involves the matrices $m(0)$ and $m(1)$ defined by (3) and (4). In the following we will prove that $m(0) \in C_N$ and $m(1) \in C_N$. First we have to establish some preliminary results.
Evaluating $R(1/2, p)$, we obtain the stated expression. Consider now the polynomial $H$. A simple induction on $m$ shows the intermediate identity, and thus the polynomial $H$ takes the stated form. In conclusion we have $0 = H(1) = 2^p R(1/2, p)$, and further, for $t = 1, \ldots, p-1$, the corresponding values also vanish. Therefore the polynomials $P$ and $Q$ coincide, which proves relation (6). Analogously we can prove relations (7), (8) and (9). Now we prove the following theorem.

Theorem 1. If the coefficients $h_k$ satisfy the condition $A_{N-1}$, then $m(0) \in C_N$ and $m(1) \in C_N$.
Proof. We have to show that $Sm(0)S^{-1} = (\alpha_{ij})_{1 \le i,j \le N}$ and $Sm(1)S^{-1} = (\beta_{ij})_{1 \le i,j \le N}$ are lower triangular. A simple calculation gives the entries $\alpha_{ij}$ and $\beta_{ij}$. In order to have lower triangular matrices we have to prove that $\alpha_{ij} = \beta_{ij} = 0$ for $i = 1, \ldots, N-1$, $j = i+1, \ldots, N$, equalities which reduce to relations between the coefficients $h_k$. First we prove these relations for fixed $i$, $1 \le i < N$, and $j = i+1$ (Step 1).
Then we use induction to prove them for arbitrary $j \le N$ (Step 2).
Step 1. Let $1 \le i < N$. We prove that $\alpha_{i,i+1} = \beta_{i,i+1} = 0$. Consider two cases.

Case 1. Even $i$: $i = 2p$. With the convention $\binom{2p}{q} = 0$ whenever $q > 2p$, we can replace the summation starting at $k = 2p - m$ with the summation starting at $k = p$. Then, using formulas (6) and (7), we obtain $\alpha_{i,i+1} = 0$, and analogously $\beta_{i,i+1} = 0$.

Case 2. Odd $i$: $i = 2p+1$. In this case the same arguments can be used to prove that $\alpha_{i,i+1} = \beta_{i,i+1} = 0$. Also, at this step we prove that $\beta_{1,j} = 0$ for $j > 2$, which follows by evaluating $\beta_{1,j}$ directly.

Step 2. We use induction, so let us suppose that $\alpha_{ij} = \beta_{ij} = 0$ and prove that $\alpha_{i,j+1} = 0$ and $\beta_{i,j+1} = 0$. A simple calculation shows that $\beta_{i,j+1} = \beta_{ij} + \alpha_{ij} = 0$ and $\alpha_{i,j+1} = \beta_{ij} + \beta_{i-1,j} = 0$. Thus, the induction allows us to state that $\alpha_{ij} = \beta_{ij} = 0$ for $1 \le i < N$, $i < j \le N$. This means that the matrices $Sm(0)S^{-1}$ and $Sm(1)S^{-1}$ are lower triangular. Let us mention that condition $A_i$ alone is enough to have zeros on row $i$ above the diagonal.

Now let us turn back to the continuity. If condition $A_1$ is satisfied, the columns of $m(0)$ and $m(1)$ add to 1: if $e = [1 \ \ldots \ 1]^T$, then $e^T m(0) = e^T$ and $e^T m(1) = e^T$. The dilation equation in vector form, on the interval $[0,1)$, is
$$\Phi(t) = m(t_1)\, \Phi(2t \bmod 1),$$
where the first digit $t_1$ of $t$ (written in base 2) decides whether the recursion uses $m(0)$ or $m(1)$:

$$t = (0.t_1 t_2 t_3 \ldots)_2.$$
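The matrices and the column-sum property above can be checked numerically. The entrywise convention assumed below, $(m(0))_{ij} = h_{2i-j-1}$ and $(m(1))_{ij} = h_{2i-j}$ for $1 \le i,j \le N$, is the usual one for coefficients $h_0, \ldots, h_N$ with $\sum_k h_k = 2$; the D4 coefficients again serve only as an example:

```python
import numpy as np

# m(0)[i,j] = h_{2i-j-1}, m(1)[i,j] = h_{2i-j} (1-based), which in the
# 0-based indexing below becomes k = 2i - j + t1.  For D4, N = 3.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 4.0
N = len(h) - 1

def m_matrix(h, t1):
    N = len(h) - 1
    M = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            k = 2 * i - j + t1
            if 0 <= k <= N:
                M[i, j] = h[k]
    return M

m0, m1 = m_matrix(h, 0), m_matrix(h, 1)

# Under A_1 every column adds to 1, i.e. e^T m(0) = e^T m(1) = e^T.
e = np.ones(N)
assert np.allclose(e @ m0, e) and np.allclose(e @ m1, e)
# Consequently lambda = 1 is an eigenvalue of both matrices.
assert any(np.isclose(np.linalg.eigvals(m0).real, 1.0))
assert any(np.isclose(np.linalg.eigvals(m1).real, 1.0))
```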
A nearby point $T$ begins with the same digits; at some step the digits differ. If $T = (0.t_1 t_2 T_3 T_4 \ldots)_2$, then the corresponding products of matrices differ from the third factor on. To prove continuity on $[0,1)$ means to show that $\Phi(t)$ is close to $\Phi(T)$ when $t$ and $T$ share more digits $t_1, t_2, \ldots, t_k$. This should happen also outside the interval $[0,1)$ (for more details, see [3]). Actually one works with the matrices $m_{N-1}(0)$ and $m_{N-1}(1)$, of order $N-1$. These matrices are the restrictions of the linear operators $m(0)$, resp. $m(1)$, to the vector space perpendicular to the vector $e = [1 \ \ldots \ 1]^T$. They may be determined in the following way. Let us define the matrix $U$. If $m(0)$ (resp. $m(1)$) is written as the block matrix $B$, then, multiplying $U^{-1}BU$ by blocks, we obtain a block triangular form. The matrix $B_{N-1}$ is the above-mentioned restriction $m_{N-1}(0)$ of $m(0)$ (resp. $m_{N-1}(1)$ of $m(1)$). It is easy to see that the eigenvalues of $m_{N-1}(t_1)$, $t_1 \in \{0,1\}$, are the eigenvalues of $m(t_1)$ after removing the eigenvalue $\lambda = 1$.
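A sketch of this restriction for the D4 example: since $e^T m(t_1) = e^T$, the subspace $\{x : e^T x = 0\}$ is invariant under $m(t_1)$, and the restricted operator drops exactly the eigenvalue $\lambda = 1$. The basis $V$ of $e^\perp$ chosen below (columns $e_i - e_{i+1}$) is one convenient choice, not necessarily the paper's matrix $U$; the eigenvalues of $m(0)$ here are $1$, $1/2$ and $(1+\sqrt{3})/4$:

```python
import numpy as np

# Restriction m_{N-1}(0) of m(0) to the invariant subspace e^T x = 0,
# computed in the basis V with columns e_i - e_{i+1}.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 4.0
m0 = np.array([[h[0], 0.0,  0.0 ],
               [h[2], h[1], h[0]],
               [0.0,  h[3], h[2]]])

V = np.array([[ 1.0,  0.0],
              [-1.0,  1.0],
              [ 0.0, -1.0]])        # basis of {x : sum x_i = 0}

B = np.linalg.pinv(V) @ m0 @ V       # the restricted operator m_2(0)

# Eigenvalues of the restriction: those of m(0) with lambda = 1 removed.
rest = np.sort(np.linalg.eigvals(B).real)
assert np.allclose(rest, np.sort([0.5, (1 + s3) / 4]))
```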
The following proposition shows the relation between the eigenvalues of the products $m(t_1)m(t_2)$, $t_i \in \{0,1\}$, and the eigenvalues of the products $m_{N-1}(t_1)m_{N-1}(t_2)$.

Proposition 1. The eigenvalues of the product $m(t_1)m(t_2)$, $t_i \in \{0,1\}$, are $\lambda = 1$ and the eigenvalues of the product $m_{N-1}(t_1)m_{N-1}(t_2)$.
Proof. The matrices $U^{-1}m(t_i)U$ and $m(t_i)$ have the same eigenvalues; computing the product of their block forms, the conclusion follows.
The relation between the eigenvalues of the matrices $m_{N-1}(0)$ and $m_{N-1}(1)$ and the continuity of the scaling function is stated in the following theorem (see [1, p. 1042]).
Here $\rho(A_1, A_2)$ denotes the "generalized spectral radius" of the matrices $A_1$ and $A_2$,
$$\rho(A_1, A_2) = \limsup_{n \to \infty} \ \max_{d_1, \ldots, d_n \in \{1,2\}} \|A_{d_1} A_{d_2} \cdots A_{d_n}\|^{1/n}.$$
Combining the result of this theorem with the result stated in the previous proposition, we can prove the following theorem.
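For every fixed $n$, the quantity $\max \|A_{d_1} \cdots A_{d_n}\|^{1/n}$ is an upper bound for $\rho(A_1, A_2)$, so a crude numerical estimate is obtained by enumerating all length-$n$ products. Applied, as an illustrative sketch, to the restricted matrices $m_2(0)$, $m_2(1)$ of the D4 example (in the same $e^\perp$ basis as above, an assumption of this sketch), the estimate falls below 1, which is the continuity criterion:

```python
import numpy as np
from itertools import product

# Upper estimate of the generalized (joint) spectral radius rho(A1, A2):
# for every n, rho <= max over all length-n products Pi of ||Pi||^(1/n).
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 4.0
m0 = np.array([[h[0], 0.0,  0.0 ], [h[2], h[1], h[0]], [0.0,  h[3], h[2]]])
m1 = np.array([[h[1], h[0], 0.0 ], [h[3], h[2], h[1]], [0.0,  0.0,  h[3]]])

def restrict(M):
    # restriction to {x : sum x_i = 0} in the basis (e1 - e2, e2 - e3)
    V = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])
    return np.linalg.pinv(V) @ M @ V

B = [restrict(m0), restrict(m1)]   # the matrices m_2(0), m_2(1)

n = 8
est = max(np.linalg.norm(np.linalg.multi_dot(w), 2) ** (1.0 / n)
          for w in product(B, repeat=n))
assert 0.68 < est < 1.0   # below 1: the continuity criterion is met
```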