MATHEMATICA - REVUE D'ANALYSE NUMÉRIQUE ET DE THÉORIE DE L'APPROXIMATION
L'ANALYSE NUMÉRIQUE ET LA THÉORIE DE L'APPROXIMATION
Tome 9, No. 1, 1980
EDITORIAL BOARD
Oleg Aramă (editorial secretary), Gheorghe Coman, Sever Groze, Adolf Haimovici, Acad. Caius Iacob (editor-in-chief), Dumitru V. Ionescu, Carol Kalik, Arpád Pál, Elena Popoviciu (deputy editor-in-chief), Ioan A. Rus, Dimitrie Stancu, Acad. Nicolae Teodorescu.
CONTENTS
G. Avdelas, A. Hadjidimos and A. Yeyios, Some Theoretical and Computational Results Concerning the Accelerated Overrelaxation (AOR) Method
Wolfgang W. Breckner, Eine Verallgemeinerung des Prinzips der Gleichmässigen Beschränktheit
S. Cobzaş, Non Convex Optimization Problems on Weakly Compact Subsets of Banach Spaces . . .
Adrian Dadu, Une méthode d'accélération de la convergence des séries trigonométriques
F. J. Delvos and H. Posdorf, A Boolean Method in Bivariate Interpolation
C. Kalik, Un théorème d'existence pour une inéquation variationnelle
Mircea Ivan, Différences divisées généralisées et fonctionnelles de forme simple
I. Kolumbán, Das Prinzip der Kondensation der Singularitäten präkonvexer Funktionen
Walter Köhnen, Einige Saturationssätze für n-parametrige Halbgruppen von Operatoren
Octavian Lipovan, Sur l'intégrabilité des fonctions multivoques ..... 75
Alexandru Lupaş, Numerical Integration by means of Gauss-Legendre Formula
C. Mustăţa, The Extension of Starshaped Bounded Lipschitz Functions ..... 93
Andrei Ney, La -métrique, instrument d'approximation dans un clan (I)
Elena Popoviciu, Sur un théorème de la moyenne ..... 107
Radu Precup, Le théorème des contractions dans des espaces syntopogènes ..... 113
I. Raşa, On Some Results of C. A. Micchelli ..... 125
Ştefan Tigan, Remarques sur certains problèmes de programmation pseudo-linéaire par morceaux ..... 129
Ch. Ullrich, Intervallrechnung über vollständig schwachgeordneten Vektoiden ..... 133
SOME THEORETICAL AND COMPUTATIONAL RESULTS CONCERNING THE ACCELERATED OVERRELAXATION (AOR) METHOD
by
G. AVDELAS, A. HADJIDIMOS and A. YEYIOS
1. Introduction
Very recently HADJIDIMOS (1978) has introduced a new iterative method for the numerical solution of a linear system $Ax = b$, where $A$ is a known $n \times n$ matrix, $x$ an unknown $n$-dimensional vector and $b$ a known vector of the same dimension. By splitting $A$ into the sum $D - L - U$, where $D$ is the diagonal part of $A$ and $L$ and $U$ the strictly lower and upper triangular parts of $A$ multiplied by $-1$, and assuming that $\det D \neq 0$, the corresponding AOR scheme has the following form:

(1.1) $x^{(m+1)} = L_{r,\omega} x^{(m)} + c, \quad m = 0, 1, 2, \ldots,$

where $I$ is the unit matrix of order $n$, $r$ is the acceleration parameter, $\omega$ is the overrelaxation parameter and $x^{(0)}$ is arbitrary. The iteration matrix of scheme (1.1) is given by

(1.2) $L_{r,\omega} = (D - rL)^{-1}[(1 - \omega)D + (\omega - r)L + \omega U],$

where

$c = \omega (D - rL)^{-1} b.$

This new two-parameter method is obviously a generalization of the SOR method (since for $r = \omega$ AOR coincides with SOR), and it should be noted that for $r \neq 0$ the AOR method is the Extrapolated SOR (ESOR) method with overrelaxation parameter $r$ and extrapolation parameter $\omega/r$. Without loss of generality we shall assume throughout this paper that $D = I$, since by premultiplication by $D^{-1}$ the new coefficient matrix of the original system will have this form.
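For readers who wish to experiment, scheme (1.1) can be sketched in a few lines of Python/NumPy. The function name, the stopping criterion and the default tolerances below are our own choices, not part of the original method:

```python
import numpy as np

def aor_solve(A, b, r, omega, x0=None, tol=1e-10, max_iter=1000):
    """A minimal sketch of the AOR scheme (1.1).

    A is split as D - L - U, with D its diagonal part and L, U the
    strictly lower/upper triangular parts of A multiplied by -1; then
    (D - r L) x_{m+1} = [(1-omega) D + (omega-r) L + omega U] x_m + omega b.
    Setting r = omega recovers the SOR method.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M = D - r * L                # lower triangular, nonsingular since det D != 0
    N = (1 - omega) * D + (omega - r) * L + omega * U
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.linalg.solve(M, N @ x + omega * b)
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x
```

In practice the lower triangular system $(D - rL)x^{(m+1)} = \ldots$ would be solved by forward substitution; `np.linalg.solve` is used here only for brevity.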
The purpose of this paper is to present some further basic results concerning the AOR method when the matrix $A$ is

i) an irreducible matrix with weak diagonal dominance,
ii) a positive definite matrix,
iii) an $L$-matrix, and
iv) an $M$-matrix,

and also to show by numerical examples its superiority compared with the SOR method.
2. A Basic Theorem
Here we state and prove a basic theorem which helps us to extend and improve the results of HADJIDIMOS (1978).

THEOREM 1. Let $A$ be a nonsingular matrix. If the SOR method with overrelaxation parameter $r$ converges, then the AOR method converges for all $\omega$ and $r$ such that $r \neq 0$ and $0 < \omega/r < 2/(1 + \rho)$, where $\rho = \rho(L_{r,r})$.

Proof. By relationship (2.7) (HADJIDIMOS, 1978) the eigenvalues $\lambda$ of the AOR iteration matrix $L_{r,\omega}$ ($r \neq 0$) are given in terms of the eigenvalues $\mu$ of the SOR iteration matrix $L_{r,r}$ by the expressions $\lambda = 1 - \omega/r + (\omega/r)\mu$. If $|\mu| \leq \rho$ with $\rho < 1$, then for $r \neq 0$ and $0 < \omega/r < 2/(1 + \rho)$ we have that

$|\lambda| \leq |1 - \omega/r| + (\omega/r)|\mu| \leq |1 - \omega/r| + (\omega/r)\rho < 1,$

and the AOR method converges for all such $\omega$ and $r$. In what follows we examine the extensions concerning the theory of the AOR method, as it was given by HADJIDIMOS (1978), in view of the theorem which has just been proved.
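The extrapolation relationship used in the proof, $L_{r,\omega} = (\omega/r)L_{r,r} + (1 - \omega/r)I$, is easy to verify numerically; the $3 \times 3$ matrix below is a toy example of our own, not one from the paper:

```python
import numpy as np

def aor_matrix(A, r, omega):
    """Iteration matrix L_{r,omega} = (D - rL)^{-1}[(1-omega)D + (omega-r)L + omega U]."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    return np.linalg.solve(D - r * L, (1 - omega) * D + (omega - r) * L + omega * U)

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
r, omega = 0.9, 1.2

# AOR is extrapolated SOR: L_{r,omega} = (omega/r) L_{r,r} + (1 - omega/r) I,
# so each AOR eigenvalue is lambda = 1 - omega/r + (omega/r) mu, with mu an
# eigenvalue of the SOR matrix L_{r,r}.
L_aor = aor_matrix(A, r, omega)
L_sor = aor_matrix(A, r, r)
print(np.allclose(L_aor, (omega / r) * L_sor + (1 - omega / r) * np.eye(3)))  # prints True
```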
3. Irreducible Matrices with Weak Diagonal Dominance
THEOREM 1. Let $A$ be an irreducible matrix with weak diagonal dominance. Then the AOR method ($r \neq 0$) converges for all $\omega$ and $r$ such that $0 < r \leq 1$ and $0 < \omega/r < 2/(1 + \rho)$, where $\rho = \rho(L_{r,r})$.

Proof. In all basic books (see VARGA, 1962; WACHSPRESS, 1966; YOUNG, 1971) a classical theorem concerning the SOR method is presented, stating that the SOR method converges for $0 < \omega \leq 1$. Thus by Theorem 2.1 above we conclude that the AOR method converges for all $\omega$ and $r$ such that $0 < r \leq 1$ and $0 < \omega/r < 2/(1 + \rho)$ (i.e., in particular for $0 < \omega \leq r \leq 1$). This contains as a corollary the theorem of section 3 (HADJIDIMOS, 1978), according to which the AOR method converges for all $\omega$ and $r$ such that $0 < \omega \leq r \leq 1$.
4. Positive Definite Matrices
THEOREM 1. Let $A$ be a positive definite (real) matrix. Then the AOR method converges for $0 < \omega \leq r \leq 2$, $\omega < 2$.

Proof. Since $A$ is positive definite we have $A = A^T$, so that (with $D = I$) $U = L^T$, and also $x^*Ax > 0$ for every nonzero complex $n$-dimensional vector $x$, with $x^*$ denoting the conjugate transpose of $x$. If $\lambda$ is an eigenvalue of $L_{r,\omega}$, then for some $x \neq 0$ we shall have $L_{r,\omega}x = \lambda x$ and hence

$[(1 - \omega)I + (\omega - r)L + \omega U]x = \lambda (I - rL)x.$

Multiplying both sides on the left by $x^*$ and solving for $\lambda$, assuming for the moment that $x^*(I - rL)x \neq 0$, we obtain

$\lambda = \dfrac{x^*[(1 - \omega)I + (\omega - r)L + \omega U]x}{x^*(I - rL)x}.$

Since $L$ is real we have

$x^*Ux = x^*L^Tx = \overline{x^*Lx},$

where $\overline{x^*Lx}$ is the complex conjugate of $x^*Lx$. Thus, if we normalize so that $x^*x = 1$ and let

(4.1) $x^*Lx = \alpha + i\beta,$

where $\alpha$ and $\beta$ are real, we have

(4.2) $\lambda = \dfrac{1 - \omega + (2\omega - r)\alpha - ir\beta}{1 - r\alpha - ir\beta}.$

To prove the validity of the theorem it is sufficient to show that $|\lambda| < 1$ for $0 < \omega \leq r \leq 2$, $\omega < 2$, or equivalently that

(4.3) $[1 - \omega + (2\omega - r)\alpha]^2 + r^2\beta^2 < (1 - r\alpha)^2 + r^2\beta^2.$

Using (4.1) we have that

$x^*Ax = x^*x - x^*Lx - x^*L^Tx = 1 - 2\alpha > 0,$

since $A$ is positive definite. This gives that $\alpha < 1/2$. Since the difference of the two sides of (4.3) factors as $-\omega(1 - 2\alpha)[2 - \omega + 2(\omega - r)\alpha]$ and $\omega(1 - 2\alpha) > 0$, relationship (4.3) is now equivalent to

(4.4) $2 - \omega + 2(\omega - r)\alpha > 0,$

which, we observe, is satisfied for $\alpha = 0$, since $0 < \omega < 2$. For $r = \omega$, (4.4) gives $2 - \omega > 0$, which is also valid. For $r \neq \omega$ we have to distinguish cases. Thus, if $\alpha < 0$, since $\omega - r < 0$, (4.4) becomes $2 - \omega + 2(\omega - r)\alpha > 2 - \omega > 0$, which is readily seen to hold. If $\alpha > 0$, then we have

$2 - \omega + 2(\omega - r)\alpha > 2 - \omega + (\omega - r) = 2 - r \geq 0,$

since $0 < 2\alpha < 1$ and $\omega - r < 0$. Thus $|\lambda| < 1$ and the convergence follows. It remains to be proved that $x^*(I - rL)x \neq 0$. For this we assume that $x^*(I - rL)x = 0$. Then it will be

$1 - r\alpha - ir\beta = 0,$

which implies that $r\alpha = 1$ and $r\beta = 0$. Since $r \neq 0$ we have that $\alpha = 1/r$ and $\beta = 0$. But $\alpha < 1/2$, or equivalently $r > 2$, which is not possible because $r \leq 2$.

Remark. By theorems 3.6 (YOUNG, 1971, p. 113) and 2.1 of this paper we can show the convergence of the AOR method for $0 < r < 2$ and $0 < \omega/r < 2/(1 + \rho)$, where $\rho = \rho(L_{r,r})$. This, however, does not include the case $r = 2$. So theorem 4.1 of the present section is more general.
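The convergence region of theorem 4.1 can be probed numerically. The sketch below uses a small symmetric positive definite matrix of our own choosing, with unit diagonal so that $D = I$ as assumed above:

```python
import numpy as np

def aor_matrix(A, r, omega):
    """Iteration matrix L_{r,omega} of the AOR method."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    return np.linalg.solve(D - r * L, (1 - omega) * D + (omega - r) * L + omega * U)

# Symmetric, strictly diagonally dominant with positive diagonal,
# hence positive definite; unit diagonal matches the assumption D = I.
A = np.array([[1.0, -0.3, 0.2],
              [-0.3, 1.0, -0.4],
              [0.2, -0.4, 1.0]])
assert np.all(np.linalg.eigvalsh(A) > 0)

# Sample the region 0 < omega <= r <= 2, omega < 2, of theorem 4.1 and
# check that the spectral radius of L_{r,omega} stays below 1 throughout.
ok = all(
    max(abs(np.linalg.eigvals(aor_matrix(A, r, w)))) < 1
    for r in np.linspace(0.1, 2.0, 20)
    for w in np.linspace(0.1, 2.0, 20)
    if w <= r and w < 2.0
)
print(ok)
```

Such a sweep is of course no substitute for the proof; it merely illustrates the stated region on one example.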
5. L- and M-Matrices
THEOREM 1. Let $A$ be an $L$-matrix. Then the AOR method converges for all $\omega$ and $r$ such that $0 < r \leq 1$ and $0 < \omega/r < 2/(1 + \rho)$, where $\rho = \rho(L_{r,r})$, if and only if the Jacobi method converges $(\rho(B) < 1)$.

Proof. If $\rho(B) < 1$, then by theorem 5.1 (a) (YOUNG, 1971, p. 120) we have that the SOR method converges for $0 < \omega \leq 1$, and therefore by theorem 2.1 of this paper the AOR method converges for $0 < r \leq 1$ and $0 < \omega/r < 2/(1 + \rho)$. Conversely, if the AOR method converges for these $\omega$ and $r$, then for $\omega = r$ we have that the SOR method converges for $0 < \omega \leq 1$. Therefore theorem 5.1 (a) (YOUNG, 1971, p. 120) implies that $\rho(B) < 1$.

If we combine the theorem above with the theorem of section 4 (HADJIDIMOS, 1978), we conclude the following more general theorem for $L$-matrices.

THEOREM 2. Let $A$ be an $L$-matrix. Then the AOR method converges for all $\omega$ and $r$ such that $0 < r \leq 1$ and $0 < \omega < 2r/(1 + \rho)$, where $\rho = \rho(L_{r,r})$, if and only if the Jacobi method converges $(\rho(B) < 1)$.

THEOREM 3. Let $A$ be an $L$-matrix. Then each of the following statements is equivalent to the other two.

S1. The Jacobi method converges $(\rho(B) < 1)$.
S2. The SOR method converges for $0 < \omega \leq 1$.
S3. The AOR method converges for $0 < r \leq 1$ and $0 < \omega/r < 2/(1 + \rho)$.

Proof. From S1 we can easily go to S2 and S3 by using theorem 5.2 above. By theorems 5.1 (a) (YOUNG, 1971, p. 120) and 5.2 of this paper, S2 implies S1 and S3. Finally, S3 implies S2 for $\omega = r$, and S1 for $\omega = r$ by theorem 5.1 (a) (YOUNG, 1971, p. 120).

THEOREM 4. Let $A$ be an $L$-matrix. Then the AOR method converges for $0 < r \leq 1$ and $0 < \omega/r < 2/(1 + \rho)$ if and only if $A$ is an $M$-matrix.

Proof. If $A$ is an $M$-matrix, then by theorem 7.2 (YOUNG, 1971, p. 43) we have $\rho(B) < 1$, and by theorem 5.9 (YOUNG, 1971, p. 126) we conclude that the SOR method converges for $0 < \omega \leq 1$. Hence by theorem 2.1 of this paper the AOR method converges for $0 < r \leq 1$ and $0 < \omega/r < 2/(1 + \rho)$. Conversely, if the AOR method converges for these $\omega$ and $r$, this implies (taking $\omega = r$) that the SOR method converges for $0 < \omega \leq 1$. If we assume that $\rho(B) \geq 1$, then by theorem 5.1 (c) (YOUNG, 1971, p. 121) we have that the SOR method does not converge for any value of $\omega$ such that $0 < \omega \leq 1$, which is not true. Thus we have $\rho(B) < 1$, and therefore $A$ is an $M$-matrix according to theorem 7.2 (YOUNG, 1971, p. 43).

THEOREM 5. Let $A$ be an $M$-matrix. Then the AOR method converges for $0 < \omega \leq r$ and $r \leq 1$, and also for $0 < r \leq 1$ and $0 < \omega/r < 2/(1 + \rho)$, where $\rho = \rho(L_{r,r})$.

Proof. The proof follows by Theorems 5.2 and 5.4.
6. Numerical Examples
The following examples confirm the validity of all the theorems presented previously and, in addition, show the asymptotic superiority of the AOR method compared with the SOR one. For this, we worked out specific examples and found the optimum spectral radii of the corresponding iteration matrices in all the cases considered. We gave to the parameters $r$ and $\omega$ values covering the interval $(0, 2)$, independently of each other. The upper bound 2 was selected because all the theorems of this paper give only sufficient conditions for the convergence of the AOR method, and not necessary ones as well, while, in addition to that, the theorem of KAHAN states that if the SOR method converges, then it must be $0 < \omega < 2$.
Using the UNIVAC 1106 Computer of the University of Salonica, with a maximum permissible relative error in finding the spectral radii, we have found that:
i) In the case where $A$ is an irreducible matrix with weak diagonal dominance, the optimum spectral radius of the AOR method, found for the optimum values of $r$ and $\omega$, is smaller than the optimum spectral radius of the SOR method, given for the optimum value of $\omega$.

ii) In the case where $A$ is a positive definite (real) matrix, it has again been found that $\rho(L_{r,\omega}) < \rho(L_{\omega,\omega})$ for the corresponding optimum values of the parameters.

iii) In the case where $A$ is an $L$-matrix with $\rho(B) < 1$, that is, an $M$-matrix, we have $\rho(L_{r,\omega}) < \rho(L_{\omega,\omega})$ for the optimum values of the parameters as well.
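The parameter search described above can be sketched as follows. The tridiagonal matrix is a small stand-in example of our own (irreducible, weakly diagonally dominant, and an $M$-matrix), and the grid step 0.05 is an arbitrary choice:

```python
import numpy as np

def aor_matrix(A, r, omega):
    """Iteration matrix L_{r,omega} of the AOR method."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    return np.linalg.solve(D - r * L, (1 - omega) * D + (omega - r) * L + omega * U)

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

# Stand-in test matrix: irreducible, weakly diagonally dominant M-matrix.
A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])

grid = np.arange(0.05, 2.0, 0.05)
# SOR: r = omega on the diagonal of the grid; AOR: r, omega vary independently.
rho_sor = min(spectral_radius(aor_matrix(A, w, w)) for w in grid)
rho_aor = min(spectral_radius(aor_matrix(A, r, w)) for r in grid for w in grid)
print(rho_aor <= rho_sor)  # prints True: the AOR grid contains the SOR diagonal
```

By construction the optimum over the two-parameter AOR grid can never be worse than the optimum over its SOR diagonal; the interesting question, answered affirmatively by the examples above, is whether it is strictly better.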
7. Final Remarks
As has been seen from the numerical examples given in the previous section, it is always $\rho(L_{r,\omega}) < \rho(L_{\omega,\omega})$ at the optimum values of the parameters; that is, the AOR method converges asymptotically faster than the corresponding SOR one. This simply suggests that the AOR method should be used in place of the SOR method whenever the latter is used.
REFERENCES
[1] Hadjidimos, A., Accelerated Overrelaxation Method, Math. Comp. 32, 149-157 (1978).
[2] Varga, R. S., Matrix Iterative Analysis. Prentice-Hall, Englewood Cliffs, 1962.
[3] Wachspress, E.L., Iterative Solution of Elliptic Systems and Application to the Neutron Diffusion Equations of Reactor Physics. Prentice-Hall, Englewood Cliffs, 1966.
[4] Young, D. M., Iterative Solution of Large Linear Systems. Academic Press, New York and London, 1971.