On the approximate solution of systems of infinite linear equations

Abstract

Authors

B. Janko
Institutul de Calcul


Paper coordinates

B. Jankó, Despre rezolvarea aproximativă a sistemelor de ecuaţii liniare infinite [On the approximate solution of infinite systems of linear equations], vol. 10 (1959), no. 2, pp. 317-327


About this paper

Journal

Studii si Cercetari Matematice

Publisher Name

Publishing House of the Academy of the Romanian People's Republic



ON THE APPROXIMATE SOLUTION OF INFINITE SYSTEMS OF LINEAR EQUATIONS

by JANKÓ BÉLA

Paper presented at the 4th Congress of Romanian Mathematicians, May 27 to June 4, 1956, Bucharest.

The aim of the present work is to give new methods for the numerical solution of systems of linear equations, generalizing the known methods of Pollaczek-Geiringer and Seidel. The basic idea in constructing these methods is to transform the given linear system into an equivalent nonlinear system of a particular form, chosen so that the following condition is fulfilled: when Newton's method [1, 2] (generalized by L. V. Kantorovich) is applied to this nonlinear system, the Fréchet derivative of the nonlinear operator is a matrix of simple form, for example diagonal, triangular, etc. [3].

§ 1. New iteration methods for solving systems of equations

We consider a countable system of equations
$$a_{i1}x_{1}+a_{i2}x_{2}+\ldots+a_{ij}x_{j}+\ldots=b_{i} \quad (i=1,2,\ldots;\ j=1,2,\ldots), \tag{1.1}$$
in which the coefficients $a_{ij}$ and the free terms $b_{i}$ are given real numbers, and the numbers $x_{j}$ are the unknowns. By a solution of the system (1.1) we understand a sequence of values $\{x_{i}\}$ which, substituted into the left-hand sides of the equations, yield convergent series whose sums are equal to the corresponding free terms. We now consider the infinite system in the form
$$\sum_{j=1}^{\infty} a_{ij}x_{j}-b_{i}=0 \quad (i=1,2,\ldots), \tag{1'.1}$$
which we shall write in the following as
$$a_{ii}x_{i}+A_{i}=0, \qquad A_{i}=\sum_{\substack{j=1\\ j\neq i}}^{\infty} a_{ij}x_{j}-b_{i} \quad (i=1,2,\ldots).$$
The linear system (1'.1) is equivalent to the nonlinear system of the form
$$F_{i}\left(x_{i}; A_{i}, C_{i}^{(0)}\right) \equiv \Phi_{i}\left(A_{i}, C_{i}^{(0)}\right)\left(a_{ii}x_{i}+A_{i}\right)=0 \quad (i=1,2,\ldots), \tag{2.1}$$
$\Phi_{i}$ being a bounded function of simple form${}^{1)}$ which has no real roots; the constant $C_{i}^{(0)}$ is determined so that for the initial approximation $X_{0}=\left(x_{1}^{0}, x_{2}^{0}, \ldots, x_{i}^{0}, \ldots\right)$ we have
$$\left(\frac{\partial F_{i}}{\partial x_{j}}\right)_{0}=0, \quad i \neq j. \tag{3.1}$$
It is easy to verify that conditions (3.1) are equivalent to
$$\left(\frac{\partial F_{i}}{\partial A_{i}}\right)_{0}=0 \quad (i=1,2,\ldots).$$
Constructing the nonlinear operator
$$F(X) \equiv \begin{pmatrix} F_{1}\left(x_{1}; A_{1}, C_{1}^{(0)}\right) \\ \vdots \\ F_{i}\left(x_{i}; A_{i}, C_{i}^{(0)}\right) \\ \vdots \end{pmatrix},$$
we can easily convince ourselves that the Fréchet derivative of the operator $F$ at the point $X_{0}$ is the diagonal matrix
$$F^{\prime}\left(X_{0}\right) \equiv \begin{pmatrix}
\left(\frac{\partial F_{1}}{\partial x_{1}}\right)_{0} & 0 & \ldots & 0 & \ldots \\
0 & \left(\frac{\partial F_{2}}{\partial x_{2}}\right)_{0} & \ldots & 0 & \ldots \\
\ldots & \ldots & \ldots & \ldots & \ldots \\
0 & 0 & \ldots & \left(\frac{\partial F_{i}}{\partial x_{i}}\right)_{0} & \ldots \\
\ldots & \ldots & \ldots & \ldots & \ldots
\end{pmatrix}.$$
Here we have written
$$X=\left(x_{1}, x_{2}, \ldots, x_{i}, \ldots\right).$$
Using Newton's modified method
$$X_{k+1}=X_{k}-\left[F^{\prime}\left(X_{0}\right)\right]^{-1} F\left(X_{k}\right) \quad (k=0,1,2,\ldots),$$
we obtain explicit formulas for the approximate solution of infinite systems, of the form
$$x_{i}^{(k+1)}=x_{i}^{(k)}-\frac{F_{i}\left(x_{i}^{(k)}; A_{i}^{(k)}, C_{i}^{(0)}\right)}{\left(\frac{\partial F_{i}}{\partial x_{i}}\right)_{0}} \quad (i=1,2,\ldots;\ k=0,1,2,\ldots), \tag{4.1}$$
where we have written $X_{k}=\left(x_{1}^{(k)}, x_{2}^{(k)}, \ldots, x_{i}^{(k)}, \ldots\right)$ and $A_{i}^{(k)}=\sum_{j \neq i} a_{ij} x_{j}^{(k)}-b_{i}$.
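As a minimal numerical sketch (ours, not the paper's), formulas (4.1) can be tried on a finite truncation of the system. The paper leaves the functions $\Phi_{i}$ unspecified beyond boundedness and the absence of real roots; the degenerate choice $\Phi_{i} \equiv 1$ below is only an illustration, under which (4.1) collapses to the ordinary iteration $x_{i} \leftarrow -A_{i}/a_{ii}$:

```python
import numpy as np

def newton_modified_4_1(a, b, x0, Phi=lambda A, C: np.ones_like(A),
                        C0=None, steps=60):
    """Sketch of formulas (4.1) on a finite n x n truncation.

    Phi plays the role of Phi_i(A_i, C_i^{(0)}); the default Phi = 1 is a
    degenerate illustrative choice.  C0 stands for the constants C_i^{(0)}.
    """
    d = np.diag(a)
    x = np.asarray(x0, dtype=float)

    def A(v):  # A_i = sum_{j != i} a_ij v_j - b_i
        return a @ v - d * v - b

    A0 = A(x)
    if C0 is None:
        C0 = np.zeros_like(A0)
    dF0 = Phi(A0, C0) * d          # (dF_i/dx_i)_0, since dA_i/dx_i = 0
    for _ in range(steps):
        Ak = A(x)
        x = x - Phi(Ak, C0) * (d * x + Ak) / dF0
    return x

a = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = newton_modified_4_1(a, b, [0.0, 0.0])
```

Since $\Phi_{i}$ has no real roots, a non-constant $\Phi_{i}$ changes only the effective step, not the fixed point $a_{ii}x_{i}+A_{i}=0$.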
Regarding the convergence of this method, we can use a theorem of Kantorovich [1], which is formulated as follows:

We consider an operator $F(X)$ which maps the space $\mathbf{X}$ into $\mathbf{Y}$, where $\mathbf{X}$ and $\mathbf{Y}$ are Banach spaces. If the following conditions are satisfied:

$1^{\circ}$. the nonlinear operator $F(X)$ is twice differentiable in the Fréchet sense;

$2^{\circ}$. for the element $X_{0}$ of the initial approximation, the operator $F^{\prime}\left(X_{0}\right)$ has an inverse $\Gamma_{0}=\left[F^{\prime}\left(X_{0}\right)\right]^{-1}$ and $\left\|\Gamma_{0}\right\| \leqslant B_{0}$;

$3^{\circ}$. the element $X_{0}$ approximately satisfies the system (2.1), with the known estimate
$$\left\|\Gamma_{0} F\left(X_{0}\right)\right\| \leqslant \eta_{0};$$

$4^{\circ}$. the second derivative $F^{\prime\prime}(X)$ is bounded in the domain $\left\|X-X_{0}\right\| \leqslant 2\eta_{0}$, that is,
$$\left\|F^{\prime\prime}(X)\right\| \leqslant K;$$

$5^{\circ}$. the constants $B_{0}, \eta_{0}, K$ satisfy the inequality
$$h_{0}=B_{0}\eta_{0}K<\frac{1}{2};$$

then the system (1.1) admits a solution $X^{*}$, the successive approximations $X_{k}$ defined by formulas (4.1) converge to $X^{*}$, and the convergence is of the order of that of a geometric series,
$$\left\|X^{*}-X_{k}\right\| \leqslant q^{k}\left\|X^{*}-X_{0}\right\|, \quad q=1-\sqrt{1-2h_{0}}<1.$$
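For concreteness (our illustration, not part of the paper), once bounds $B_{0}$, $\eta_{0}$, $K$ are at hand, condition $5^{\circ}$ and the resulting geometric factor can be evaluated directly:

```python
import math

def kantorovich_q(B0, eta0, K):
    """Given bounds B0 >= ||Gamma_0||, eta0 >= ||Gamma_0 F(X_0)|| and
    K >= ||F''(X)||, form h0 = B0*eta0*K; when h0 < 1/2, also return the
    geometric factor q = 1 - sqrt(1 - 2 h0) of the error bound
    ||X* - X_k|| <= q**k ||X* - X_0||."""
    h0 = B0 * eta0 * K
    if h0 >= 0.5:
        return h0, None          # condition 5 violated: no conclusion
    return h0, 1.0 - math.sqrt(1.0 - 2.0 * h0)

h0, q = kantorovich_q(1.0, 0.1, 2.0)   # hypothetical bounds
```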
Observations. I. The hypothesis $\left\|F^{\prime\prime}(X)\right\| \leqslant K$ imposes a boundedness condition on the given linear system as well, for example in the sense
$$\sup_{i}\left\{2\left|a_{ii}\right|\left|\frac{d\Phi_{i}}{dA_{i}}\right|\sum_{j=1}^{\infty}\left|a_{ij}\right|+\left(\left|\frac{d^{2}\Phi_{i}}{dA_{i}^{2}}\right|\left|a_{ii}x_{i}+A_{i}\right|+\left|\frac{d\Phi_{i}}{dA_{i}}\right|\right)\sum_{j,k=1}^{\infty}\left|a_{ij}a_{ik}\right|\right\} \leqslant K,$$
if $\|x\|=\sup_{i}\left|x_{i}\right|$.
II. We note that instead of the system (2.1) we can consider the system
$$P_{i}\left(x_{i}; A_{i}, C_{i}^{(0)}\right) \equiv \varphi_{i}\left(x_{i}\right)\Phi_{i}\left(A_{i}, C_{i}^{(0)}\right)\left(a_{ii}x_{i}+A_{i}\right)=0 \quad (i=1,2,\ldots),$$
$\varphi_{i}$ being a bounded function of simple form, having no real roots. The functions $\varphi_{i}$ can contribute to improving the convergence.
III. Regarding the structure of the approximation formulas (4.1), we observe that in determining the components $x_{i}^{(k+1)}$ $(i=1,2,\ldots)$ only the components $x_{i}^{(k)}$ $(i=1,2,\ldots)$ were used. We can indicate here a modified procedure, using the formulas
$$x_{i}^{(k+1)}=x_{i}^{(k)}-\frac{\varphi_{i}\left(x_{i}^{(k)}\right)\Phi_{i}\left(A_{i}^{(k,k+1)}, C_{i}^{(0)}\right)\left(a_{ii}x_{i}^{(k)}+A_{i}^{(k,k+1)}\right)}{\left(\frac{\partial P_{i}}{\partial x_{i}}\right)_{0}} \quad (i=1,2,3,\ldots), \tag{4'.1}$$
where we have written
$$A_{i}^{(k,k+1)}=\sum_{j=1}^{i-1} a_{ij}x_{j}^{(k+1)}+\sum_{j=i+1}^{\infty} a_{ij}x_{j}^{(k)}.$$
Thus here, in calculating the component $x_{i}^{(k+1)}$, the approximations
$$x_{1}^{(k+1)}, x_{2}^{(k+1)}, \ldots, x_{i-1}^{(k+1)}, x_{i}^{(k)}, x_{i+1}^{(k)}, \ldots$$
are used, somewhat analogously to Seidel's method.
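One sweep of this modified procedure, again on a finite truncation and with the degenerate choice $\varphi_{i}=\Phi_{i}\equiv 1$ (our simplification), can be sketched as follows; under that choice the denominator reduces to $a_{ii}$ and the sweep coincides with a Gauss-Seidel step:

```python
import numpy as np

def seidel_like_sweep(a, b, x):
    """One sweep of the Seidel-like modified formulas: when updating x_i
    the already-updated components x_1, ..., x_{i-1} enter through
    A_i^{(k,k+1)}.  With phi_i = Phi_i = 1 this is Seidel's method."""
    x = np.array(x, dtype=float)
    for i in range(len(b)):
        # A_i^{(k,k+1)}: x[:i] already updated, x[i+1:] still old
        Ai = a[i] @ x - a[i, i] * x[i] - b[i]
        x[i] -= (a[i, i] * x[i] + Ai) / a[i, i]
    return x

a = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.zeros(2)
for _ in range(30):
    x = seidel_like_sweep(a, b, x)
```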
IV. Regarding the uniqueness of the solution, we can use another theorem of Kantorovich:

If the conditions of the previous theorem are satisfied, with the difference that the inequality
$$\left\|F^{\prime\prime}(X)\right\| \leqslant K$$
holds in the domain defined by the inequality
$$\left\|X-X_{0}\right\| \leqslant \frac{1+\sqrt{1-2h_{0}}}{h_{0}}\,\eta_{0}, \tag{5.1}$$
then the solution of the system (1'.1) is unique in the domain (5.1).
V. In practical applications the convergence conditions of Kantorovich's theorem for solving systems of equations appear somewhat complicated. However, if for technical reasons we can delimit the domain $\mathscr{D}$ in which the solution must be sought, then instead of conditions $3^{\circ}-5^{\circ}$ we can use the condition
$$\left\|E-\left[F^{\prime}\left(X_{0}\right)\right]^{-1}F^{\prime}(X)\right\|<1, \quad X \in \mathscr{D},$$
$E$ being the unit matrix.
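On a finite truncation this practical condition can be checked directly; the sketch below is our illustration, with $F^{\prime}(X_{0})$ and $F^{\prime}(X)$ supplied as matrices and the norm induced by $\|x\|=\sup_{i}|x_{i}|$ (the row-sum norm):

```python
import numpy as np

def practical_condition_holds(F1_X0, F1_X):
    """Check ||E - [F'(X0)]^{-1} F'(X)|| < 1 in the row-sum norm,
    i.e. the matrix norm induced by ||x|| = sup_i |x_i|."""
    E = np.eye(F1_X0.shape[0])
    M = E - np.linalg.solve(F1_X0, F1_X)
    return bool(np.max(np.abs(M).sum(axis=1)) < 1.0)

# a diagonal F'(X0) and a nearby F'(X), purely for illustration
ok = practical_condition_holds(np.diag([2.0, 2.0]),
                               np.array([[2.0, 0.5], [0.3, 2.0]]))
```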
VI. The approximation formulas constructed in this paragraph can be applied not only to the linear system (1'.1), but also to infinite nonlinear systems of the form
$$g_{i}\left(x_{i}\right)+f_{i}\left(A_{i}\right)=0 \quad (i=1,2,\ldots),$$
where $g_{i}\left(x_{i}\right)$ and $f_{i}\left(A_{i}\right)$ are functions of a real variable admitting derivatives of order 2 and satisfying the conditions of the previous theorem.

§ 2. Other iteration methods

We consider the system (1.1) in the following form:
$$\begin{gathered}
a_{11}x_{1}+B_{1}=0 \\
a_{21}x_{1}+a_{22}x_{2}+B_{2}=0 \\
\cdots\cdots\cdots\cdots \\
a_{i1}x_{1}+a_{i2}x_{2}+\ldots+a_{ii}x_{i}+B_{i}=0 \\
\cdots\cdots\cdots\cdots
\end{gathered}$$
where $B_{i}=\sum_{j=i+1}^{\infty} a_{ij}x_{j}-b_{i}$ $(i=1,2,\ldots)$. We then construct the nonlinear system equivalent to (1.1),
$$\begin{gathered}
f_{1}\left(x_{1}; B_{1}, k_{1}^{0}\right) \equiv \psi_{1}\left(B_{1}; k_{1}^{0}\right)\left(a_{11}x_{1}+B_{1}\right)=0 \\
f_{2}\left(x_{1}, x_{2}; B_{2}, k_{2}^{0}\right) \equiv \psi_{2}\left(B_{2}; k_{2}^{0}\right)\left(a_{21}x_{1}+a_{22}x_{2}+B_{2}\right)=0 \\
\cdots\cdots\cdots\cdots \\
f_{i}\left(x_{1}, x_{2}, \ldots, x_{i}; B_{i}, k_{i}^{0}\right) \equiv \psi_{i}\left(B_{i}; k_{i}^{0}\right)\left(a_{i1}x_{1}+\ldots+a_{ii}x_{i}+B_{i}\right)=0 \tag{1.2} \\
\cdots\cdots\cdots\cdots
\end{gathered}$$
The functions $\psi_{i}$ are subject to exactly the same conditions as the functions $\Phi_{i}$ of § 1. The constants $k_{i}^{0}$ are determined in such a way that for the initial approximation $X_{0}=\left(x_{1}^{0}, x_{2}^{0}, \ldots, x_{i}^{0}, \ldots\right)$ we have
$$\left(\frac{\partial f_{i}}{\partial x_{j}}\right)_{0}=0 \quad \text{for } j=i+1, i+2, \ldots,$$
which is equivalent to
$$\left(\frac{\partial f_{i}}{\partial B_{i}}\right)_{0}=0 \quad (i=1,2,\ldots).$$
We construct the nonlinear operator
$$f(X)=\begin{pmatrix} f_{1}\left(x_{1}; B_{1}, k_{1}^{0}\right) \\ \vdots \\ f_{i}\left(x_{1}, x_{2}, \ldots, x_{i}; B_{i}, k_{i}^{0}\right) \\ \vdots \end{pmatrix},$$
whose derivative${}^{2)}$ in the Fréchet sense at the point $X_{0}$ is of the lower triangular form
$$f^{\prime}\left(X_{0}\right)=\begin{pmatrix}
\left(\frac{\partial f_{1}}{\partial x_{1}}\right)_{0} & 0 & \ldots & 0 & \ldots \\
\left(\frac{\partial f_{2}}{\partial x_{1}}\right)_{0} & \left(\frac{\partial f_{2}}{\partial x_{2}}\right)_{0} & \ldots & 0 & \ldots \\
\ldots & \ldots & \ldots & \ldots & \ldots \\
\left(\frac{\partial f_{i}}{\partial x_{1}}\right)_{0} & \left(\frac{\partial f_{i}}{\partial x_{2}}\right)_{0} & \ldots & \left(\frac{\partial f_{i}}{\partial x_{i}}\right)_{0} & \ldots \\
\ldots & \ldots & \ldots & \ldots & \ldots
\end{pmatrix}.$$
Using Newton's modified method we obtain the approximation formulas, which here take the form
$$\tilde{A}X_{k+1}=\tilde{A}X_{k}-\Delta^{-1}f\left(X_{k}\right), \tag{2.2}$$
where
$$\tilde{A}=\begin{pmatrix}
a_{11} & 0 & 0 & \ldots & 0 & \ldots \\
a_{21} & a_{22} & 0 & \ldots & 0 & \ldots \\
a_{31} & a_{32} & a_{33} & \ldots & 0 & \ldots \\
\ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\
a_{i1} & a_{i2} & a_{i3} & \ldots & a_{ii} & \ldots \\
\ldots & \ldots & \ldots & \ldots & \ldots & \ldots
\end{pmatrix}$$
and $\Delta=\left(\delta_{ii}\right)$, $\delta_{ii}=\psi_{i}\left(B_{i}^{0}, k_{i}^{0}\right)$. The approximation $X_{k+1}$ is computed in the same way as in Seidel's method.
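With the degenerate choice $\psi_{i}\equiv 1$ (so $\Delta=E$; our simplification for illustration), one step of (2.2) on a finite truncation amounts to a forward substitution with the lower triangular part $\tilde{A}$, and the step reduces to Seidel's method in matrix form:

```python
import numpy as np

def forward_solve(L, y):
    """Solve L z = y for lower triangular L by forward substitution,
    mirroring the component-by-component computation of X_{k+1}."""
    z = np.zeros_like(y)
    for i in range(len(y)):
        z[i] = (y[i] - L[i, :i] @ z[:i]) / L[i, i]
    return z

def step_2_2(a, b, x):
    # (2.2) with psi_i = 1: A~ X_{k+1} = A~ X_k - f(X_k), f(X_k) = a x - b
    L = np.tril(a)                        # the matrix A~
    return x - forward_solve(L, a @ x - b)

a = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.zeros(2)
for _ in range(25):
    x = step_2_2(a, b, x)
```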
Observations. I. Instead of system (1.2) we can consider the system
$$p^{(i)}\left(x_{1}, x_{2}, \ldots, x_{i}; B_{i}, k_{i}^{0}\right) \equiv \Psi_{i}\left(x_{i}\right)\psi_{i}\left(B_{i}, k_{i}^{0}\right)\left(a_{i1}x_{1}+\ldots+a_{ii}x_{i}+B_{i}\right)=0,$$
$\Psi_{i}\left(x_{i}\right)$ being also a bounded function, subject to the same conditions as the function $\varphi_{i}$ of § 1. In this case the derivative of the operator
$$P(X) \equiv \begin{pmatrix} p^{(1)}\left(x_{1}; B_{1}, k_{1}^{0}\right) \\ \vdots \\ p^{(i)}\left(x_{1}, \ldots, x_{i}; B_{i}, k_{i}^{0}\right) \\ \vdots \end{pmatrix}$$
will also be an infinite triangular matrix, of the form
$$P^{\prime}\left(X_{0}\right)=\begin{pmatrix}
\left(\frac{\partial p^{(1)}}{\partial x_{1}}\right)_{0} & 0 & \ldots & 0 & \ldots \\
\left(\frac{\partial p^{(2)}}{\partial x_{1}}\right)_{0} & \left(\frac{\partial p^{(2)}}{\partial x_{2}}\right)_{0} & \ldots & 0 & \ldots \\
\ldots & \ldots & \ldots & \ldots & \ldots \\
\left(\frac{\partial p^{(i)}}{\partial x_{1}}\right)_{0} & \left(\frac{\partial p^{(i)}}{\partial x_{2}}\right)_{0} & \ldots & \left(\frac{\partial p^{(i)}}{\partial x_{i}}\right)_{0} & \ldots \\
\ldots & \ldots & \ldots & \ldots & \ldots
\end{pmatrix}.$$
II. The matrix $P^{\prime}\left(X_{0}\right)$ can be given an even simpler form. This is easily done if we write system (1.1) as
$$a_{i,i-1}x_{i-1}+a_{ii}x_{i}+\beta_{i}=0 \quad (i=1,2,\ldots),$$
where $\beta_{i}$ collects the remaining terms, $\beta_{i}=\sum_{j \neq i-1,\,i} a_{ij}x_{j}-b_{i}$, and then construct the nonlinear system
$$\bar{p}^{(i)}\left(x_{i-1}, x_{i}; \beta_{i}\right) \equiv \bar{\Psi}\left(x_{i}\right)\psi\left(\beta_{i}; K_{i}^{0}\right)\left(a_{i,i-1}x_{i-1}+a_{ii}x_{i}+\beta_{i}\right)=0.$$
Imposing the condition $\left(\frac{\partial \bar{p}^{(i)}}{\partial \beta_{i}}\right)_{0}=0$ $(i=1,2,\ldots)$, we obtain as derivative a triangular matrix of a particular (bidiagonal) form,
$$\begin{pmatrix}
\bar{p}_{11} & 0 & 0 & \ldots & 0 & 0 & \ldots \\
\bar{p}_{21} & \bar{p}_{22} & 0 & \ldots & 0 & 0 & \ldots \\
0 & \bar{p}_{32} & \bar{p}_{33} & \ldots & 0 & 0 & \ldots \\
\ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\
0 & 0 & 0 & \ldots & \bar{p}_{i,i-1} & \bar{p}_{ii} & \ldots \\
\ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots
\end{pmatrix}.$$
Apart from the simple matrices considered here, other types of matrices whose inverses are easy to calculate can also be used. Analogously, we can construct the matrix $P^{\prime}$ so that each of its rows contains at most $m$ non-zero elements.

§ 3. The case of Newton's unmodified method

Using the functions $\Phi_{i}\left(A_{i}; C_{i}^{(k)}\right)$ we can arrange, at every approximation $x_{i}^{(k)}$ $(k=0,1,\ldots)$, that the derivative in the Fréchet sense be a diagonal matrix. Applying Newton's unmodified procedure
$$X_{k+1}=X_{k}-\left[F^{\prime}\left(X_{k}\right)\right]^{-1}F\left(X_{k}\right), \tag{1.3}$$
we arrive at the formulas
$$x_{i}^{(k+1)}=x_{i}^{(k)}-\frac{f_{k}^{(i)}}{\left(\frac{\partial f^{(i)}}{\partial x_{i}}\right)_{k}} \quad (i=1,2,\ldots;\ k=0,1,2,\ldots), \tag{1'.3}$$
where $f_{k}^{(i)}=\varphi_{i}\left(x_{i}^{(k)}\right)\Phi_{i}\left(A_{i}^{(k)}; C_{i}^{(k)}\right)\left(a_{ii}x_{i}^{(k)}+A_{i}^{(k)}\right)$. We note, however, that Kantorovich's theorem cannot be applied to the method (1'.3), because in general the second-order derivatives of the expressions
$$\varphi_{i}\left(x_{i}\right)\Phi_{i}\left(A_{i}; C_{i}^{(k)}\right)\left(a_{ii}x_{i}+A_{i}\right)$$
do not remain bounded for arbitrary $k$. We note that for $\varphi_{i}\left(x_{i}\right)=1$ we recover the Pollaczek-Geiringer method (the ordinary iteration method).
Finally, using formula (1.3) and proceeding analogously as in § 2, methods similar to (2.2) can be given, except that in this case the triangular matrix $F^{\prime}$ will also depend on $k$. Seidel's well-known method also belongs to this category.

These methods, being successive approximation methods, have the following structure:
$$X_{k+1}=X_{k}+\Delta_{k},$$
where $X_{k+1}, X_{k}$ are the approximations of order $k+1$ and $k$, and $\Delta_{k}$ is the correction of order $k$, calculated according to a given algorithm. If the norms of these corrections increase during the calculations, then we can consider the procedure not convergent, because, working in a complete space, it is necessary for convergence that $\left\|\Delta_{k}\right\|=\left\|X_{k+1}-X_{k}\right\| \rightarrow 0$. However, we cannot say anything about the convergence of the procedure when we find that during the calculations the corrections decrease in norm, since this does not yet mean $\left\|\Delta_{k}\right\| \rightarrow 0$; in this case we can only say that the approximate solution "approaches" the exact one. In numerical applications, however, one works with a finite number of "steps", that is, we can make only a finite number of approximations, aiming at a given precision. Therefore, if during the successive approximations the corrections decrease in norm and at a given moment we reach the order of precision with which the coefficients and free terms of the system were given, then such an approximate solution satisfies the practical requirements; it is illusory to continue the calculations further, and the convergence of the procedure then no longer concerns us.
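The stopping rule just described, monitoring the norms of the corrections $\Delta_{k}$ and halting either when they reach the precision of the data or when they start to grow, can be sketched as follows (our sketch; the step function is any of the iterations above, here instantiated with the ordinary iteration as an example):

```python
import numpy as np

def iterate_with_corrections(step, x0, tol, max_steps=500):
    """Run X_{k+1} = X_k + Delta_k, watching ||Delta_k|| in the sup norm.
    Stop when the correction drops to the precision tol of the data;
    flag the procedure as non-convergent if the corrections grow."""
    x = np.asarray(x0, dtype=float)
    prev = np.inf
    for _ in range(max_steps):
        x_next = step(x)
        delta = np.max(np.abs(x_next - x))    # ||Delta_k||
        x = x_next
        if delta <= tol:
            return x, True                    # practical precision reached
        if delta > prev:
            return x, False                   # corrections increase
        prev = delta
    return x, False

# example step: the ordinary iteration x_i <- -A_i / a_ii
a = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
ordinary = lambda x: (b - (a @ x - np.diag(a) * x)) / np.diag(a)
sol, converged = iterate_with_corrections(ordinary, np.zeros(2), 1e-10)
```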
In the following, we will give conditions for the existence of the solution of the system, as well as bounds for estimating the error.

§ 4. Error bounds

In § 1 we obtained the bound
$$\|X^* - X_k\| \leqslant q^{k-1} \|X^* - X_1\| \leqslant 2 q^k \eta_0,$$
but we can give bounds involving expressions that are simpler for numerical applications.
We consider normed linear spaces [2] $\mathbf{X}$ and $\overline{\mathbf{X}}$, where $\overline{\mathbf{X}}$ is complete and isomorphic to the subspace $\mathbf{X}' \subset \mathbf{X}$. We then assume that this isomorphism is realized by a linear operator $\varphi_0$ which maps $\mathbf{X}'$ onto $\overline{\mathbf{X}}$, and that $\varphi_0^{-1}$ exists and is continuous. Finally, it is also assumed that $\varphi_0$ can be extended to the entire space $\mathbf{X}$.
We consider the "exact" equation
$$K X \equiv X - \lambda H X = Y, \tag{1.4}$$
where $X, Y \in \mathbf{X}$, and $H$ is an operator mapping $\mathbf{X}$ into itself. For the approximate solution of equation (1.4) we also consider the "approximate" equation
$$\bar{K} \bar{X} \equiv \bar{X} - \lambda \bar{H} \bar{X} = \bar{Y} = \varphi Y. \tag{2.4}$$
The condition that equations (1.4) and (2.4) are "close" is expressed as follows:
$$\left\|\varphi K X' - \bar{K} \varphi X'\right\| \leqslant \varepsilon |\lambda| \, \|X'\|, \quad X' \in \mathbf{X}'. \tag{I}$$
It is also assumed that for any element $X \in \mathbf{X}$ there is an element $X' \in \mathbf{X}'$ such that
$$\left\|H X - X'\right\| \leqslant \varepsilon_1 \|X\|, \tag{II}$$
and for the element $Y$ there is an element $Y' \in \mathbf{X}'$ such that
$$\left\|Y - Y'\right\| \leqslant \varepsilon_2 \|Y\|. \tag{III}$$
In our case we will have
$$\mathbf{X} = \mathbf{X}' = \overline{\mathbf{X}}.$$
We now consider the "exact" system
$$A X = f$$
and the "approximate" one
$$\bar{X} = \bar{f},$$
where $\bar{f} = x_0 - C_0(A x_0 - f)$, $C_0$ being a diagonal matrix with non-zero diagonal elements. These elements are calculated according to a certain rule (e.g. according to Newton's method) or are chosen empirically, taking into account the requirements of practical calculations.
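As an illustration, $\bar{f} = x_0 - C_0(A x_0 - f)$ can be computed for a finite section of the system (a sketch under our own assumptions: the truncation to $n$ equations and the Jacobi-like empirical choice $C_0 = \operatorname{diag}(1/a_{ii})$ are not prescribed by the paper):

```python
# Sketch: form the approximate right-hand side f_bar = x0 - C0*(A*x0 - f)
# for a truncated n x n section, with the assumed diagonal choice
# C0 = diag(1/a_ii) (diagonal elements of A assumed non-zero).

def f_bar(A, f, x0):
    n = len(A)
    # residual r = A*x0 - f
    r = [sum(A[i][j] * x0[j] for j in range(n)) - f[i] for i in range(n)]
    c0 = [1.0 / A[i][i] for i in range(n)]   # diagonal of C0
    return [x0[i] - c0[i] * r[i] for i in range(n)]

# Toy 2 x 2 section: one corrected approximation X_bar from x0 = 0.
A = [[4.0, 1.0], [1.0, 3.0]]
f = [5.0, 4.0]
xbar = f_bar(A, f, [0.0, 0.0])
```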
We will look for the solutions $X, \bar{X}$ in the form $X = x + x_0$, $\bar{X} = \bar{x} + x_0$; here $x_0$ is an approximate solution, $\bar{x}$ is a correction used to obtain the approximation $\bar{X}$, and $x$ is the "exact" correction. Performing these transformations we obtain
$$\begin{aligned} A x &= -(A x_0 - f) \\ \bar{x} &= -C_0 (A x_0 - f). \end{aligned}$$
Finally, multiplying the first system by the matrix $C_0$, we get the systems
$$\begin{aligned} C_0 A x &= -C_0 (A x_0 - f) \\ \bar{x} &= -C_0 (A x_0 - f). \end{aligned}$$
Thus, in our case we have $K = C_0 A$ and $\bar{K} = E$, $E$ being the unit matrix; moreover $Y = \bar{Y} = -C_0(A x_0 - f)$, and so $\varphi = \varphi_0 = E$. Condition (I) now takes the form
$$\left\|C_0 A x - E x\right\| \leqslant \varepsilon \|x\|,$$
and for $\varepsilon$ we can take $\varepsilon = \|C_0 A - E\|$. As for conditions (II) and (III), they are fulfilled because the spaces $\mathbf{X}$ and $\mathbf{X}'$ coincide; thus we have $\varepsilon_1 = \varepsilon_2 = 0$.
Now applying Kantorovich's theorems 2, 5 and 7 [2], we obtain the following result:
If the inequality is satisfied
$$2 \left\|C_0 A - E\right\| < 1,$$
then there exists a solution $X^*$, which is unique, and we have the estimate
$$\left\|X^* - \bar{X}\right\| \leqslant \frac{2\left\|C_0 A - E\right\|}{1 - 2\left\|C_0 A - E\right\|} \left\|C_0 (A x_0 - f)\right\|.$$
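For a finite section of the system, this condition and bound can be evaluated numerically (a sketch with choices of our own: the row-sum norm for $\|\cdot\|$ and the diagonal $C_0 = \operatorname{diag}(1/a_{ii})$ are assumptions, not prescribed by the paper):

```python
# Sketch: check 2*||C0*A - E|| < 1 in the row-sum (max) norm and evaluate
# the a posteriori bound  2*eps/(1 - 2*eps) * ||C0*(A*x0 - f)||
# for a truncated n x n section, with C0 = diag(1/a_ii).

def error_bound(A, f, x0):
    n = len(A)
    c0 = [1.0 / A[i][i] for i in range(n)]
    # eps = row-sum norm of C0*A - E
    eps = max(sum(abs(c0[i] * A[i][j] - (1.0 if i == j else 0.0))
                  for j in range(n)) for i in range(n))
    if 2 * eps >= 1:
        raise ValueError("condition 2*||C0*A - E|| < 1 not satisfied")
    # ||C0*(A*x0 - f)|| in the max norm
    r = [sum(A[i][j] * x0[j] for j in range(n)) - f[i] for i in range(n)]
    rhs = max(abs(c0[i] * r[i]) for i in range(n))
    return 2 * eps / (1 - 2 * eps) * rhs

# Toy 2 x 2 section whose exact solution is (1, 1); here eps = 0.2,
# so 2*eps = 0.4 < 1 and the bound applies.
A = [[5.0, 1.0], [1.0, 5.0]]
f = [6.0, 6.0]
bound = error_bound(A, f, [0.0, 0.0])
```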
Observation. If instead of the equation $\bar{X} = \bar{f}$ we have the approximate equation
$$\tilde{A} \bar{X} = f',$$
where $f' = \tilde{A} x_0 - \Delta^{-1} f(x_0)$ (see formula (2.2)), then we can proceed as in the previous case. The expression $\Delta^{-1} f(x_0)$ can be put in the form $\gamma_0 (A x_0 - f)$, and in this case we will have $K = \gamma_0 A$, $\bar{K} = \tilde{A}$, $\varepsilon = \|\gamma_0 A - \tilde{A}\|$.
Finally, we arrive at the following bound:
$$\left\|X^* - X\right\| \leqslant \frac{2\left\|\gamma_0 A - \tilde{A}\right\| \left\|\tilde{A}^{-1}\right\|}{1 - 2\left\|\gamma_0 A - \tilde{A}\right\| \left\|\tilde{A}^{-1}\right\|} \left\|\gamma_0 (A x_0 - f)\right\|.$$
These bounds can also be used for the methods discussed in § 3, putting $X = X_k$ (for fixed $k$).

ON THE APPROXIMATE SOLUTION OF INFINITE SYSTEMS OF LINEAR EQUATIONS

(Brief content)

In this work, by generalizing the known methods of Pollaczek and Geiringer [3], new methods for the numerical solution of infinite systems of linear equations are given.
The infinite system of linear equations is transformed into an equivalent nonlinear system, chosen in such a way that, when Newton's method as generalized by Kantorovich is applied, the derivative of the operation $F$ is a matrix of simple form, for example diagonal, triangular, etc.

ON THE APPROXIMATE SOLUTION OF INFINITE SYSTEMS OF LINEAR EQUATIONS

(Summary)

In this work, the author gives new methods for the numerical solution of infinite linear systems, generalizing the known methods of Pollaczek-Geiringer and Seidel [3].
Here the linear system has been transformed into an equivalent nonlinear system, conveniently chosen in the following sense: when applying Newton's method as generalized by Kantorovich, the derivative of the operation $F$ is a matrix of simple form, e.g. diagonal, triangular, etc.

BIBLIOGRAPHY

  1. L. V. Kantorovich, Newton's method. Trudy Mat. Inst. im. V. A. Steklova, XXVIII, 127 (1949).
     — Functional analysis and applied mathematics. Uspekhi Mat. Nauk, III, 6, pp. 107, 111 (1948).
  2. B. Jankó, Newton's method and the approximate solution of systems of linear algebraic equations. Studii si Cerc. de Mat. (Cluj), VIII, 103 (1957).

1) By this we mean that the values of the functions $\Phi_i$ can be calculated by arithmetic operations (or that we have at hand the necessary tables for the values of these functions), e.g.
$$\left[1 + C_i^{(0)}\left(A_i - A_i^{(0)}\right)\right]^2 + 1, \quad e^{C_i^{(0)} A_i} \ \text{etc., where } A_i^{(0)} = \sum_{j=1}^{\infty} a_{ij} x_j^{0} - b_i, \quad (i = 1, 2, \ldots).$$
2) In connection with the existence of the derivative, it is assumed that the sums $\sum_{j=1}^{\infty}\left|\frac{\partial f_i}{\partial x_j}\right|$ and $\sum_{j, l=1}^{\infty}\left|\frac{\partial^2 f_i}{\partial x_j \partial x_l}\right|$ are bounded for every $i$.
1959
