On some interpolatory iterative methods for the second degree polynomial operators (II)

Abstract

In this paper we apply some iterative methods obtained by inverse interpolation in order to solve some specific classes of equations: the Riccati equation, a Fredholm-type equation, and the eigenvalue problem for a class of linear operators.

We obtain some semilocal convergence results, showing the r-convergence orders of the iterates.

Authors

Emil Cătinaş
(Tiberiu Popoviciu Institute of Numerical Analysis)

Ion Păvăloiu
(Tiberiu Popoviciu Institute of Numerical Analysis)

Keywords

inverse interpolation iterative methods; Riccati equation; Fredholm-type equation; eigenvalue problem; semilocal convergence results; r-convergence order.

Cite this paper as:

E. Cătinaş, I. Păvăloiu, On some interpolatory iterative methods for the second degree polynomial operators (II), Rev. Anal. Numér. Théor. Approx., 28 (1999) no. 2, pp. 133-143.

PDF

Scanned paper: on the journal website.

LaTeX version of the paper (soon).

About this paper

Publisher Name

Éditions de l'Académie Roumaine.

Print ISSN

1222-9024

Online ISSN

2457-8126

MR

?

ZBL

?

References

[1] P. M. Anselone and L. B. Rall, The solution of characteristic value-vector problems by Newton's method, Numer. Math., 11 (1968), pp. 38-45.
[2] E. Cătinaş and I. Păvăloiu, On the Chebyshev method for approximating the eigenvalues of linear operators, Rev. Anal. Numér. Théor. Approx., 25 (1996) nos. 1-2, pp. 43-56.
[3] E. Cătinaş and I. Păvăloiu, On a Chebyshev-type method for approximating the solutions of polynomial operator equations of degree 2, Proceedings of the International Conference on Approximation and Optimization, Cluj-Napoca, July 29-August 1, 1996, Vol. 1, pp. 219-226.
[4] E. Cătinaş and I. Păvăloiu, On approximating the eigenvalues and eigenvectors of linear continuous operators, Rev. Anal. Numér. Théor. Approx., 26 (1997) nos. 1-2, pp. 19-27.
[5] E. Cătinaş and I. Păvăloiu, On some interpolatory iterative methods for the second degree polynomial operators (I), Rev. Anal. Numér. Théor. Approx., to appear.
[6] F. Chatelin, Valeurs propres de matrices, Masson, Paris, 1988.
[7] L. Collatz, Funktionalanalysis und numerische Mathematik, Springer-Verlag, Berlin, 1964.
[8] J. J. Dongarra, C. B. Moler and J. H. Wilkinson, Improving the accuracy of computed eigenvalues and eigenvectors, SIAM J. Numer. Anal., 20 (1983) no. 1, pp. 23-45.
[9] S. M. Grzegórski, On the scaled Newton method for the symmetric eigenvalue problem, Computing, 45 (1990), pp. 277-282.
[10] V. S. Kartyshov and L. Yukhno, O nekotorykh modifikatsiyakh metoda N'yutona dlya resheniya nelineinoi spektral'noi zadachi, Zh. Vychisl. Matem. i Matem. Fiz., 33 (1993) no. 9, pp. 1403-1409 (in Russian).
[11] M. L. Krasnov, Integral Equations: Theoretical Introduction, Nauka, Moscow, 1975 (in Russian).
[12] J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[13] G. Peters and J. H. Wilkinson, Inverse iteration, ill-conditioned equations and Newton's method, SIAM Rev., 21 (1979) no. 3, pp. 339-360.
[14] M. C. Santos, A note on the Newton iteration for the algebraic eigenvalue problem, SIAM J. Matrix Anal. Appl., 9 (1988) no. 4, pp. 561-569.
[15] R. A. Tapia and L. D. Whitley, The projected Newton method has order 1+√2 for the symmetric eigenvalue problem, SIAM J. Numer. Anal., 25 (1988) no. 6, pp. 1376-1382.
[16] S. Ul'm, On the iterative method with simultaneous approximation of the inverse of the operator, Izv. Akad. Nauk Eston. SSR, 16 (1967) no. 4, pp. 403-411.
[17] K. Wu, Y. Saad and A. Stathopoulos, Inexact Newton preconditioning techniques for eigenvalue problems, Lawrence Berkeley National Laboratory report no. 41382 / Minnesota Supercomputing Institute report no. UMSI 98-10, 1998.
[18] T. Yamamoto, Error bounds for computed eigenvalues and eigenvectors, Numer. Math., 34 (1980), pp. 189-199.

Paper (preprint) in HTML form

On some Interpolatory Iterative Methods for the Second Degree Polynomial Operators (II)

Emil Cătinaş and Ion Păvăloiu

1. The approximation of the solutions of Riccati differential equations

1991 AMS Subject Classification: 65J20, 65H17.

Consider the differential equation

(1) $y' = a_0(x)\,y^2 + a_1(x)\,y + a_2(x),$

where the functions $a_0,a_1,a_2:[a,b]\to\mathbb{R}$ are continuous on the interval $[a,b]$, $a<b$, and $a_2$ is not the null function.

We are interested in approximating the solution of (1) which satisfies the condition

(2) $y(a) = y_a.$

The solution will be sought in the space $C^1[a,b]$. For this purpose we consider the operator $F:C^1[a,b]\to C[a,b]$ given by

(3) $F(y)(x) = y'(x) - a_0(x)\,y^2(x) - a_1(x)\,y(x) - a_2(x).$

The first and second order divided differences of F are given by

(4) $[y_1,y_2;F]h\,(x) = h'(x) - \big(a_0(x)\,(y_1(x)+y_2(x)) + a_1(x)\big)\,h(x),$

respectively

(5) $[y_1,y_2,y_3;F]hk\,(x) = -a_0(x)\,h(x)\,k(x),$

for all $y_1,y_2,y_3,h,k\in C^1[a,b]$.

Formula (5) shows that the divided differences of order higher than two are the null multilinear operators.

Let $y_0,y_1\in C^1[a,b]$ be given. The chord iterates $y_k\in C^1[a,b]$, $k=2,3,\ldots$, are constructed by

(6) $[y_{k-1},y_k;F](y_{k+1}-y_k) + F(y_k) = 0, \qquad k=1,2,\ldots$

As we shall see, the above equations lead to some first order linear nonhomogeneous differential equations.

Replacing in (6) the expression (4) of the divided difference of $F$, we get for $y_{k+1}$ the following differential equation:

(7) $y_{k+1}'(x) = \varphi_k(x)\,y_{k+1}(x) + \psi_k(x),$

where we have denoted

$\varphi_k(x) = a_0(x)\big(y_{k-1}(x) + y_k(x)\big) + a_1(x)$ and
$\psi_k(x) = -a_0(x)\,y_k(x)\,y_{k-1}(x) + a_2(x).$

For solving (7) we consider the condition

(8) $y_{k+1}(a) = y_a,$

so we are led to the following expression for $y_{k+1}$:

(9) $y_{k+1}(x) = \exp\Big(\int_a^x \varphi_k(s)\,ds\Big)\Big(y_a + \int_a^x \psi_k(s)\,\exp\Big(-\int_a^s \varphi_k(t)\,dt\Big)\,ds\Big).$
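To make the construction concrete, the following short numerical sketch (ours, not part of the paper) carries out the chord iteration through formula (9) on a uniform grid, approximating the integrals by the trapezoidal rule. The test problem $y' = y^2 + 1$, $y(0)=0$ on $[0,1]$ (exact solution $\tan x$) and both starting functions are illustrative choices; whether they satisfy the hypotheses of Theorem 1.1 below is not verified here.

```python
# A minimal sketch (not from the paper) of the chord iteration (9) for the
# Riccati test problem y' = y^2 + 1, y(0) = 0 on [0,1]; exact solution tan(x).
import numpy as np

def cumtrapz0(f, x):
    """Cumulative trapezoidal integral of f from x[0], starting at 0."""
    inc = 0.5 * (f[1:] + f[:-1]) * np.diff(x)
    return np.concatenate(([0.0], np.cumsum(inc)))

a0 = lambda x: np.ones_like(x)    # coefficient of y^2 in (1)
a1 = lambda x: np.zeros_like(x)   # coefficient of y
a2 = lambda x: np.ones_like(x)    # free term
x = np.linspace(0.0, 1.0, 2001)
ya = 0.0                          # the initial condition (2)

yk_prev = np.zeros_like(x)        # y_0: the null function
yk = 0.1 * x                      # y_1: a crude second starting function

for k in range(8):
    phi = a0(x) * (yk_prev + yk) + a1(x)       # phi_k from (7)
    psi = -a0(x) * yk * yk_prev + a2(x)        # psi_k from (7)
    Phi = cumtrapz0(phi, x)                    # integral of phi_k from a to x
    ynew = np.exp(Phi) * (ya + cumtrapz0(psi * np.exp(-Phi), x))  # formula (9)
    yk_prev, yk = yk, ynew
    print(k, np.max(np.abs(yk - np.tan(x))))   # error against the exact solution
```

The printed errors should decrease superlinearly until they reach the level of the quadrature error of the grid.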

We are now interested in finding a bound for $\|[y_0,y_1;F]^{-1}\|$.

For this purpose consider the problem

$\begin{cases} [y_0,y_1;F]h\,(x) = u(x), & u\in C[a,b],\\ h(a) = 0, \end{cases}$

which has the solution

$h(x) = \exp\Big(\int_a^x \varphi_1(s)\,ds\Big)\int_a^x u(s)\,\exp\Big(-\int_a^s \varphi_1(t)\,dt\Big)\,ds.$

This equality yields

(10) $\|[y_0,y_1;F]^{-1}\| \le \sup_{x\in[a,b]} \exp\Big(\int_a^x \varphi_1(s)\,ds\Big)\int_a^x \exp\Big(-\int_a^s \varphi_1(t)\,dt\Big)\,ds.$

For the second order divided differences we have

(11) $\|[u,v,w;F]\| \le \sup_{x\in[a,b]} |a_0(x)|.$

Now denote

$\bar b_0 = \sup_{x\in[a,b]} \exp\Big(\int_a^x \varphi_1(s)\,ds\Big)\int_a^x \exp\Big(-\int_a^s \varphi_1(t)\,dt\Big)\,ds;$
$\bar\alpha = \sup_{x\in[a,b]} |a_0(x)|;$
$\bar d_0 = \sup_{x\in[a,b]} |y_1(x) - y_0(x)|;$
$\bar B_{\bar r}(y_0) = \big\{ y\in C^1[a,b] : \sup_{x\in[a,b]} |y(x) - y_0(x)| \le \bar r \big\}.$

With the above notations, and taking into account relations (10) and (11), Theorem 4.1 from [6] implies:

Theorem 1.1.

Assume that the functions $y_0,y_1\in C^1[a,b]$, $a_0,a_1,a_2\in C[a,b]$ and the positive real numbers $\bar b_0$, $\bar\alpha$, $\bar d_0$ and $\bar r$ satisfy the following conditions:

a) $\bar b_0\,\bar\alpha\,(2\bar r + \bar d_0) = q < 1$;

b) $\varepsilon_0 = \bar\alpha\,\bar b^{\,2} \sup_{x\in[a,b]} |F(y_0)(x)| < 1$ and $\bar\alpha\,\bar b \sup_{x\in[a,b]} |F(y_1)(x)| \le \varepsilon_0^{\ell}$, where $\bar b = \frac{\bar b_0}{1-q}$ and $\ell = \frac{1+\sqrt{5}}{2}$;

c) $\dfrac{\varepsilon_0^{\ell}}{\bar\alpha\,\bar b\,(1-\varepsilon_0)} + \bar d_0 \le \bar r.$

Then the sequence $(y_k)_{k\ge0}$ given by (9) converges uniformly, and its elements lie in the ball $\bar B_{\bar r}(y_0)$. Denoting $y^* = \lim_{k\to\infty} y_k$, then $y^*$ is a solution of (1)-(2). Moreover, the following estimation holds:

$\sup_{x\in[a,b]} |y^*(x) - y_k(x)| \le \dfrac{\varepsilon_0^{\ell^k}}{\bar\alpha\,\bar b\,\big(1-\varepsilon_0^{\ell}\big)}, \qquad k=2,3,\ldots$
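The exponent $\ell$ comes from the standard recurrence satisfied by the chord method (cf. [6]); the following is a sketch of the mechanism (ours), with $\varepsilon_k$ denoting the normalized quantities $\bar\alpha\,\bar b^{\,2}\sup_{x\in[a,b]}|F(y_k)(x)|$.

```latex
% Sketch: why the r-convergence order is \ell = (1+\sqrt{5})/2.
% From F(y_{k+1}) = [y_{k-1},y_k,y_{k+1};F](y_{k+1}-y_{k-1})(y_{k+1}-y_k)
% and the bounds (10)--(11) one obtains the recurrence
\varepsilon_{k+1} \le \varepsilon_k \, \varepsilon_{k-1} .
% If \varepsilon_0 < 1 and \varepsilon_1 \le \varepsilon_0^{\ell}, induction
% using \ell^2 = \ell + 1 gives
\varepsilon_{k+1} \le \varepsilon_0^{\ell^k} \, \varepsilon_0^{\ell^{k-1}}
                  = \varepsilon_0^{\ell^{k-1}(\ell+1)}
                  = \varepsilon_0^{\ell^{k+1}} .
```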

2. The approximation of the solutions of Fredholm integral equations

Consider the integral equation

(12) $\varphi(t) = \lambda \int_a^b a_1(t,s)\,\varphi(s)\,ds + \mu \int_a^b a_2(t,s)\,\varphi^2(s)\,ds + f(t),$

where $\lambda,\mu\in\mathbb{R}$, $a_1,a_2\in C([a,b]^2)$ and $f\in C[a,b]$.

Here we take the operator $F:C[a,b]\to C[a,b]$ given by

(13) $F(\varphi)(t) = \varphi(t) - \lambda \int_a^b a_1(t,s)\,\varphi(s)\,ds - \mu \int_a^b a_2(t,s)\,\varphi^2(s)\,ds - f(t).$

The first and the second order divided differences of $F$ at $u,v,w\in C[a,b]$ are given by the following relations:

(14) $[u,v;F]h\,(t) = h(t) - \lambda \int_a^b a_1(t,s)\,h(s)\,ds - \mu \int_a^b a_2(t,s)\,\big[u(s)+v(s)\big]\,h(s)\,ds,$
$[u,v,w;F]hk\,(t) = -\mu \int_a^b a_2(t,s)\,h(s)\,k(s)\,ds,$

where $h,k\in C[a,b]$.

Let $\varphi_0,\varphi_1\in C[a,b]$ be given. As in the previous section, we construct the sequence $(\varphi_k)_{k\ge0}\subset C[a,b]$ whose elements are the solutions of the following linear integral equations:

(15) $[\varphi_{k-1},\varphi_k;F](\varphi_{k+1}-\varphi_k) + F(\varphi_k) = 0, \qquad k=1,2,\ldots$

From (15), by (13) and (14), we get for $\varphi_{k+1}$ the following linear Fredholm integral equation:

(16) $\varphi_{k+1}(t) = \lambda \int_a^b a_1(t,s)\,\varphi_{k+1}(s)\,ds + \mu \int_a^b a_2(t,s)\,\big(\varphi_{k-1}(s)+\varphi_k(s)\big)\,\varphi_{k+1}(s)\,ds + G_k(t),$

where

(17) $G_k(t) = f(t) - \mu \int_a^b a_2(t,s)\,\varphi_{k-1}(s)\,\varphi_k(s)\,ds.$

In order to apply Theorem 4.1 from [6] to the convergence of the sequence $(\varphi_k)_{k\ge0}$, we need a bound for $\|[\varphi_0,\varphi_1;F]^{-1}\|$. In this respect we consider the following equation:

$[\varphi_0,\varphi_1;F]h\,(t) = u(t)$, with $\lambda\neq0$,

i.e.

(18) $h(t) = \lambda \int_a^b a_1(t,s)\,h(s)\,ds + \mu \int_a^b a_2(t,s)\,\big(\varphi_0(s)+\varphi_1(s)\big)\,h(s)\,ds + u(t).$

We shall assume that this equation has a unique solution. Denote by $K_0(t,s,\lambda,\mu/\lambda)$ the corresponding "resolvent" kernel. Then the solution of (18) has the following form:

(19) $h(t) = u(t) + \lambda \int_a^b K_0(t,s,\lambda,\mu/\lambda)\,u(s)\,ds.$

From the above considerations it can be easily seen that the following holds:

(20) $\|[\varphi_0,\varphi_1;F]^{-1}\| \le 1 + |\lambda|\,(b-a) \sup_{a\le t,s\le b} |K_0(t,s,\lambda,\mu/\lambda)|.$

At the same time, the norm of [u,v,w;F] is bounded by

$\|[u,v,w;F]\| \le |\mu|\,(b-a) \sup_{a\le t,s\le b} |a_2(t,s)|.$

We introduce the following notations:

$\tilde b_0 = 1 + |\lambda|\,(b-a) \sup_{a\le t,s\le b} |K_0(t,s,\lambda,\mu/\lambda)|;$
$\tilde\alpha = |\mu|\,(b-a) \sup_{a\le t,s\le b} |a_2(t,s)|;$
$\tilde d_0 = \sup_{t\in[a,b]} |\varphi_1(t) - \varphi_0(t)|;$
$\bar B_{\tilde r}(\varphi_0) = \big\{ \varphi\in C[a,b] : \sup_{t\in[a,b]} |\varphi(t) - \varphi_0(t)| \le \tilde r \big\}.$

Theorem 4.1 from [6] then implies the next result.

Theorem 2.1.

Assume that the functions $a_1,a_2\in C([a,b]^2)$ and the real numbers $\lambda, \mu, \tilde b_0, \tilde\alpha, \tilde d_0$ and $\tilde r$ satisfy the following conditions:

a) $\tilde b_0\,\tilde\alpha\,(2\tilde r + \tilde d_0) = \tilde q < 1$;

b) $\varepsilon_0 = \tilde\alpha\,\tilde b^{\,2}\,\|F(\varphi_0)\| < 1$ and $\rho_1 = \tilde\alpha\,\tilde b\,\|F(\varphi_1)\| \le \varepsilon_0^{\ell}$, where $\|F(\varphi_i)\| = \sup_{t\in[a,b]} |F(\varphi_i)(t)|$, $i=0,1$; $\tilde b = \frac{\tilde b_0}{1-\tilde q}$ and $\ell = \frac{1+\sqrt{5}}{2}$;

c) $\dfrac{\varepsilon_0^{\ell}}{\tilde\alpha\,\tilde b\,(1-\varepsilon_0)} + \tilde d_0 \le \tilde r.$

Then the sequence $(\varphi_k)_{k\ge0}$ generated by (16) converges uniformly, and its elements lie in $\bar B_{\tilde r}(\varphi_0)$. Denoting $\varphi^* = \lim_{k\to\infty} \varphi_k$, then $\varphi^*$ is a solution of the integral equation (12), and the following estimations hold:

$\sup_{t\in[a,b]} |\varphi^*(t) - \varphi_k(t)| \le \dfrac{\varepsilon_0^{\ell^k}}{\tilde\alpha\,\tilde b\,\big(1-\varepsilon_0^{\ell}\big)}, \qquad k=2,3,\ldots$

As we have seen, the chord method requires solving a linear Fredholm integral equation at each iteration step. From the practical viewpoint, the problem takes a simpler form when the kernels $a_1$ and $a_2$ from (12) are degenerate. In this case the linear integral equations (16) also have degenerate kernels, as can be easily seen. In the following we shall consider this particular case.

Let $\alpha_i,\beta_i\in C[a,b]$, $i=1,\ldots,p$, be two sets of $p$ linearly independent functions, and let $\gamma_i,\delta_i\in C[a,b]$, $i=1,\ldots,m$, be two other sets with the same property.

Assume the kernels a1 and a2 have the following form:

$a_1(t,s) = \sum_{i=1}^{p} \alpha_i(t)\,\beta_i(s)$ and
$a_2(t,s) = \sum_{i=1}^{m} \gamma_i(t)\,\delta_i(s).$

It can be verified without difficulty that the solution φk+1 of equation (16) has the form

(21) $\varphi_{k+1}(t) = \lambda \sum_{i=1}^{p} \alpha_i(t) \int_a^b \beta_i(s)\,\varphi_{k+1}(s)\,ds + \mu \sum_{i=1}^{m} \gamma_i(t) \int_a^b \delta_i(s)\,\big(\varphi_{k-1}(s)+\varphi_k(s)\big)\,\varphi_{k+1}(s)\,ds + G_k(t).$

Consider the following notations:

$x_i^{(k+1)} = \int_a^b \beta_i(s)\,\varphi_{k+1}(s)\,ds, \quad i=1,\ldots,p;$
$y_i^{(k+1)} = \int_a^b \delta_i(s)\,\big(\varphi_{k-1}(s)+\varphi_k(s)\big)\,\varphi_{k+1}(s)\,ds, \quad i=1,\ldots,m;$
$a_{ij} = \int_a^b \alpha_i(t)\,\beta_j(t)\,dt, \quad i,j=1,\ldots,p;$
$b_{ij} = \int_a^b \gamma_i(t)\,\beta_j(t)\,dt, \quad i=1,\ldots,m,\ j=1,\ldots,p;$
$A_{ji}^{(k+1)} = \int_a^b \alpha_i(t)\,\delta_j(t)\,\big(\varphi_k(t)+\varphi_{k-1}(t)\big)\,dt, \quad i=1,\ldots,p,\ j=1,\ldots,m;$
$B_{ji}^{(k+1)} = \int_a^b \gamma_i(t)\,\delta_j(t)\,\big(\varphi_k(t)+\varphi_{k-1}(t)\big)\,dt, \quad i,j=1,\ldots,m;$
$\theta_j^{(k+1)} = \int_a^b \beta_j(t)\,G_k(t)\,dt, \quad j=1,\ldots,p;$
$\varepsilon_j^{(k+1)} = \int_a^b \delta_j(t)\,\big(\varphi_k(t)+\varphi_{k-1}(t)\big)\,G_k(t)\,dt, \quad j=1,\ldots,m.$

We obtain then for φk+1 the expression

(22) $\varphi_{k+1}(t) = \lambda \sum_{i=1}^{p} x_i^{(k+1)}\,\alpha_i(t) + \mu \sum_{i=1}^{m} y_i^{(k+1)}\,\gamma_i(t) + G_k(t),$

where $x_i^{(k+1)}$, $i=1,\ldots,p$, and $y_i^{(k+1)}$, $i=1,\ldots,m$, represent the solution of the linear system

(23) $X^{(k+1)} = \lambda U X^{(k+1)} + \mu V Y^{(k+1)} + \theta^{(k+1)},$
$Y^{(k+1)} = \lambda W^{(k+1)} X^{(k+1)} + \mu T^{(k+1)} Y^{(k+1)} + \varepsilon^{(k+1)},$

where

$U = (a_{ji})_{1\le i,j\le p}, \qquad V = (b_{ji})_{1\le i\le p,\ 1\le j\le m},$
$W^{(k+1)} = \big(A_{ji}^{(k+1)}\big)_{1\le i\le p,\ 1\le j\le m}, \qquad T^{(k+1)} = \big(B_{ji}^{(k+1)}\big)_{1\le i,j\le m},$
$X^{(k+1)} = \big(x_1^{(k+1)}, x_2^{(k+1)}, \ldots, x_p^{(k+1)}\big)^T,$
$Y^{(k+1)} = \big(y_1^{(k+1)}, y_2^{(k+1)}, \ldots, y_m^{(k+1)}\big)^T,$
$\theta^{(k+1)} = \big(\theta_1^{(k+1)}, \theta_2^{(k+1)}, \ldots, \theta_p^{(k+1)}\big)^T,$
$\varepsilon^{(k+1)} = \big(\varepsilon_1^{(k+1)}, \varepsilon_2^{(k+1)}, \ldots, \varepsilon_m^{(k+1)}\big)^T.$

It can be easily seen that, under the conditions of Theorem 2.1, the sequences $(X^{(k)})_{k\ge0}$ and $(Y^{(k)})_{k\ge0}$ converge.
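For a concrete picture of this reduction, here is a small numerical sketch (ours, not from the paper) of iteration (16) in the degenerate case, assembled and solved through the system (23). The kernels, the values of $\lambda, \mu$ and the free term $f$ are hypothetical test data ($p = m = 1$ on $[0,1]$), and the integrals are approximated by the trapezoidal rule.

```python
# A minimal sketch (not from the paper) of the chord iteration (16) for
# degenerate kernels, through the linear system (23); hypothetical test data.
import numpy as np

lam, mu = 0.25, 0.25
t = np.linspace(0.0, 1.0, 1001)
w = np.full_like(t, t[1] - t[0])
w[0] = w[-1] = (t[1] - t[0]) / 2           # trapezoidal quadrature weights
quad = lambda g: np.dot(w, g)              # integral over [a, b] = [0, 1]

alpha = [t]                                # alpha_i(t), i = 1..p  (here p = 1)
beta = [t]                                 # beta_i(s)
gamma = [t]                                # gamma_i(t), i = 1..m  (here m = 1)
delta = [t]                                # delta_i(s)
f = t.copy()
p, m = len(alpha), len(gamma)

phi_prev = np.zeros_like(t)                # phi_0
phi = f.copy()                             # phi_1

for k in range(8):
    s = phi_prev + phi                     # phi_{k-1} + phi_k
    Gk = f - mu * sum(gamma[i] * quad(delta[i] * phi_prev * phi) for i in range(m))  # (17)
    # assemble and solve the linear system (23)
    U = np.array([[quad(alpha[i] * beta[j]) for i in range(p)] for j in range(p)])
    V = np.array([[quad(gamma[i] * beta[j]) for i in range(m)] for j in range(p)])
    W = np.array([[quad(alpha[i] * delta[j] * s) for i in range(p)] for j in range(m)])
    T = np.array([[quad(gamma[i] * delta[j] * s) for i in range(m)] for j in range(m)])
    theta = np.array([quad(beta[j] * Gk) for j in range(p)])
    eps = np.array([quad(delta[j] * s * Gk) for j in range(m)])
    M = np.eye(p + m) - np.block([[lam * U, mu * V], [lam * W, mu * T]])
    z = np.linalg.solve(M, np.concatenate([theta, eps]))
    X, Y = z[:p], z[p:]
    phi_new = (lam * sum(X[i] * alpha[i] for i in range(p))
               + mu * sum(Y[i] * gamma[i] for i in range(m)) + Gk)   # (22)
    res = (phi_new - lam * sum(alpha[i] * quad(beta[i] * phi_new) for i in range(p))
           - mu * sum(gamma[i] * quad(delta[i] * phi_new ** 2) for i in range(m)) - f)
    print(k, np.max(np.abs(res)))          # residual of (12): a stopping criterion
    phi_prev, phi = phi, phi_new
```

For these particular data the solution has the form $\varphi^*(t) = Ct$ for a constant $C$, so the convergence of the iterates is easy to monitor through the printed residuals.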

3. The approximation of the eigenpairs of matrices

Denote $V = \mathbb{K}^n$ and let $A \in \mathbb{K}^{n\times n}$, where $\mathbb{K} = \mathbb{R}$ or $\mathbb{C}$. As we have seen in [6], for computing the eigenpairs of $A$ we may consider a mapping $G : V \to \mathbb{K}$ with $G(0) \neq 1$. The eigenvalues and eigenvectors are the solutions of the nonlinear system

(24) $F(x) = \begin{pmatrix} Av - \lambda v \\ G(v) - 1 \end{pmatrix} = 0,$

where $x = (v,\lambda) \in V\times\mathbb{K} = \mathbb{K}^{n+1}$. Denoting $v = (x^{(1)}, x^{(2)}, \ldots, x^{(n)})$ and $\lambda = x^{(n+1)}$, the first $n$ components $F_i$, $i=1,\ldots,n$, of $F$ are given by

$F_i(x) = a_{i1}x^{(1)} + \cdots + a_{i,i-1}x^{(i-1)} + \big(a_{ii} - x^{(n+1)}\big)x^{(i)} + a_{i,i+1}x^{(i+1)} + \cdots + a_{in}x^{(n)}.$

If we take G as

$G(v) = a\,\|v\|_2^2$

for some fixed $a \neq 0$, then

$F_{n+1}(x) = a\big((x^{(1)})^2 + \cdots + (x^{(n)})^2\big) - 1.$

The matrix associated to the first order divided difference of $F$ at the points $x_1,x_2\in\mathbb{K}^{n+1}$ is

$[x_1,x_2;F] = \begin{pmatrix} b_{11} & a_{12} & \cdots & a_{1n} & a^1_{1,n+1} \\ a_{21} & b_{22} & \cdots & a_{2n} & a^1_{2,n+1} \\ \vdots & & & & \vdots \\ a_{n1} & a_{n2} & \cdots & b_{nn} & a^1_{n,n+1} \\ a_{n+1,1} & a_{n+1,2} & \cdots & a_{n+1,n} & 0 \end{pmatrix}$

where $b_{ii} = a_{ii} - \frac12\big(x_1^{(n+1)} + x_2^{(n+1)}\big)$, $a^1_{i,n+1} = -\frac12\big(x_1^{(i)} + x_2^{(i)}\big)$ and $a_{n+1,i} = a\big(x_1^{(i)} + x_2^{(i)}\big)$, for $i=1,\ldots,n$.

The second order divided differences of $F$ at $x_1,x_2,x_3$ are given by

$[x_1,x_2,x_3;F]hk = \begin{pmatrix} -\frac12 k^{(n+1)} & 0 & \cdots & 0 & -\frac12 k^{(1)} \\ 0 & -\frac12 k^{(n+1)} & \cdots & 0 & -\frac12 k^{(2)} \\ \vdots & & & & \vdots \\ 0 & 0 & \cdots & -\frac12 k^{(n+1)} & -\frac12 k^{(n)} \\ a k^{(1)} & a k^{(2)} & \cdots & a k^{(n)} & 0 \end{pmatrix} \begin{pmatrix} h^{(1)} \\ h^{(2)} \\ \vdots \\ h^{(n)} \\ h^{(n+1)} \end{pmatrix}$

for all $h,k\in\mathbb{K}^{n+1}$.

We shall consider the max norm on $V$, and, taking $\|x\| = \max\{\|v\|, |\lambda|\}$ for all $x = (v,\lambda) \in \mathbb{K}^{n+1}$, we are led to the max norm on $\mathbb{K}^{n+1}$. It can be easily verified that

$\|[x_1,x_2,x_3;F]\| \le \max\{1, n|a|\}.$
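This bound can be checked directly (our verification): with the max norm, each of the first $n$ rows of the matrix above contributes at most $\|h\|\,\|k\|$, while the last row contributes at most $n|a|\,\|h\|\,\|k\|$:

```latex
% Row-wise estimate (our verification of the bound above):
% rows i = 1, ..., n:
\big| -\tfrac12 k^{(n+1)} h^{(i)} - \tfrac12 k^{(i)} h^{(n+1)} \big|
  \le \tfrac12 \|k\| \|h\| + \tfrac12 \|k\| \|h\| = \|h\| \|k\| ;
% row n+1:
\Big| a \sum_{i=1}^{n} k^{(i)} h^{(i)} \Big| \le n |a| \, \|h\| \|k\| ,
% hence \|[x_1,x_2,x_3;F] h k\| \le \max\{1, n|a|\} \, \|h\| \|k\|.
```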

Let $x_0,x_1\in\mathbb{K}^{n+1}$ be such that $[x_0,x_1;F]$ is nonsingular. Denote $\hat b_0 = \|[x_0,x_1;F]^{-1}\|$, $\hat d_0 = \|x_0 - x_1\|$, $\hat\alpha = \max\{1, n|a|\}$ and $\bar B_{\hat r}(x_0) = \{x\in\mathbb{K}^{n+1} : \|x - x_0\| \le \hat r\}$. Applying Theorem 4.1 from [6] we get:

Theorem 3.1.

Assume that the matrix $[x_0,x_1;F]$ is nonsingular and the numbers $\hat b_0$, $\hat d_0$, $\hat\alpha$ and $\hat r$ satisfy:

a) $\hat b_0\,\hat\alpha\,(2\hat r + \hat d_0) = \hat q < 1$;

b) $\varepsilon_0 = \hat\alpha\,\hat b^{\,2}\,\|F(x_0)\| < 1$ and $\hat\alpha\,\hat b\,\|F(x_1)\| \le \varepsilon_0^{\ell}$, where $\hat b = \frac{\hat b_0}{1-\hat q}$ and $\ell = \frac{1+\sqrt{5}}{2}$;

c) $\dfrac{\varepsilon_0^{\ell}}{\hat\alpha\,\hat b\,(1-\varepsilon_0)} + \hat d_0 \le \hat r.$

Then the sequence $(x_k)_{k\ge0}$ given by the iterations $[x_{k-1},x_k;F](x_{k+1}-x_k) + F(x_k) = 0$, $k=1,2,\ldots$, lies in the ball $\bar B_{\hat r}(x_0)$ and converges. Denoting $x^* = \lim_{k\to\infty} x_k$, then $F(x^*) = 0$ and the following estimations hold:

$\|x^* - x_k\| \le \dfrac{\varepsilon_0^{\ell^k}}{\hat\alpha\,\hat b\,\big(1-\varepsilon_0^{\ell}\big)}, \qquad k=2,3,\ldots$
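Before turning to the experiments, here is a minimal sketch (ours, not from the paper) of the chord iteration of Theorem 3.1, with the matrix $[x_1,x_2;F]$ assembled from the expressions given above. The symmetric test matrix, the scaling $a$ and the starting points are hypothetical choices, and the largest eigenvalue is assumed to be simple for this seed.

```python
# A minimal sketch (not from the paper) of the chord method of Theorem 3.1
# for an eigenpair of a small symmetric test matrix, with G(v) = a*||v||_2^2.
import numpy as np

def F(A, a, x):
    """F(x) from (24)."""
    v, lam = x[:-1], x[-1]
    return np.concatenate([A @ v - lam * v, [a * np.dot(v, v) - 1.0]])

def divided_difference(A, a, x1, x2):
    """The matrix [x1, x2; F] described in Section 3."""
    n = A.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A - 0.5 * (x1[-1] + x2[-1]) * np.eye(n)   # b_ii on the diagonal
    M[:n, n] = -0.5 * (x1[:n] + x2[:n])                   # entries a^1_{i,n+1}
    M[n, :n] = a * (x1[:n] + x2[:n])                      # entries a_{n+1,i}
    return M

rng = np.random.default_rng(0)
n, a = 5, 0.5
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                                # symmetric test matrix

evals, evecs = np.linalg.eigh(A)
v_star = evecs[:, -1] / np.sqrt(a)               # scaled so that a*||v||^2 = 1
x_star = np.concatenate([v_star, [evals[-1]]])   # the exact eigenpair

x0 = x_star + 0.10 * rng.standard_normal(n + 1)  # two starting approximations
x1 = x_star + 0.05 * rng.standard_normal(n + 1)

for k in range(10):
    step = np.linalg.solve(divided_difference(A, a, x0, x1), F(A, a, x1))
    x0, x1 = x1, x1 - step                       # chord step
    print(k, np.max(np.abs(x1 - x_star)))        # max-norm error
```

The errors should exhibit the order-$\ell$ behavior predicted by Theorem 3.1, similar to the tables below.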

4. Numerical examples

We shall consider two test matrices (available from the Matrix Market at http://math.nist.gov/MatrixMarket/) in order to study the behavior of the chord method for approximating the eigenpairs. The programs were written in Matlab (MATLAB is a registered trademark of The MathWorks, Inc.) and were run on a PC.

Pores1 matrix. This matrix arises from oil reservoir simulation. It is real, unsymmetric, of dimension 30, and has 20 real eigenvalues. We have chosen to study the largest eigenvalue, $\lambda^* = 1.8363\cdot10^{1}$. The initial approximations were taken $\lambda_0 = \lambda^* + 0.5$ and $\lambda_1 = \lambda^* + 0.25$; for the initial vector $v_0$ we perturbed the solution $v^*$ (computed by Matlab and then properly scaled to fulfill the norming equation) with random vectors having the components uniformly distributed in $(-\varepsilon,\varepsilon)$, $\varepsilon = 0.2$; for the vector $v_1$ we halved the perturbation. The following results are typical for the runs made (we have considered here the same perturbation vector for the four initial approximations), where for choice I we took $a = \frac{1}{2}$ in $G$, while for choice II $a = \frac{1}{2n}$.

 k      Choice I                      Choice II
        ‖x*−x_k‖      ‖F(x_k)‖       ‖x*−x_k‖      ‖F(x_k)‖
 0      8.5333e-01    1.7938e+06     8.5333e-01    1.7938e+06
 1      4.2667e-01    8.9691e+05     4.2667e-01    8.9690e+05
 2      7.8601e-02    2.0609e-01     1.6245e-02    1.7221e-01
 3      6.6831e-03    9.5180e-03     2.9086e-04    1.9677e-03
 4      1.7647e-04    2.5007e-04     2.9783e-07    2.1405e-06
 5      2.3706e-07    3.3458e-07     7.5862e-11    9.1430e-10
 6      4.6624e-10    8.2317e-10

Table 1. The chord method for Pores1.

Fidap002 matrix. This real symmetric matrix of dimension $n = 441$ arises from finite element modeling. Its eigenvalues are all simple and range from $7\cdot10^{-8}$ to $3\cdot10^{6}$. We have chosen to study the smallest eigenvalue, which is well separated. The initial approximations were taken $\lambda_0 = \lambda^* + 10^2 = 6.9996\cdot10^{-8} + 100$, resp. $\lambda_1 = \lambda^* + 10$; for the initial vector $v_0$ we perturbed the solution $v^*$ with random vectors having the components uniformly distributed in $(-\varepsilon,\varepsilon)$, $\varepsilon = 0.5$; for the vector $v_1$ we halved the perturbation. The following results are typical for the runs made (we have considered a common perturbation vector).

 k      Choice I                      Choice II
        ‖x*−x_k‖      ‖F(x_k)‖       ‖x*−x_k‖      ‖F(x_k)‖
 0      1.0003e+02    1.7467e+09     2.5107e+00    1.7467e+09
 1      1.0078e+01    8.7335e+08     1.2553e+00    8.7335e+08
 2      3.4853e+01    1.6462e+02     5.2877e-02    3.5641e-03
 3      2.1368e+00    2.3373e+01     6.1804e-05    4.9048e-06
 4      4.4761e-01    6.3521e-01     5.9858e-07    4.4309e-06
 5      6.5214e-03    9.2238e-03
 6      1.5617e-05    2.2961e-05
 7      5.9605e-07    8.5647e-08

Table 2. The secant method for Fidap002.


Received: March 3, 1998.

Romanian Academy

P.O. Box 68

3400 Cluj-Napoca 1

Romania
