Abstract
Inexact Newton methods are variants of the Newton method in which the linear system at each step is solved only approximately [1]. The local convergence theory given by the authors of [1], and most of the results based on it, consider the error terms as arising only from the fact that the linear systems are not solved exactly. The few existing results for the general case (when perturbed linear systems are considered, which in turn are not solved exactly) do not offer explicit formulas in terms of the perturbations and residuals.
We extend this local convergence theory to the general case, characterizing the rate of convergence (the q-convergence orders) in terms of the perturbations and residuals.
The Newton iterations are then analyzed when, at each step, an approximate solution of the linear system is determined by one of the following Krylov solvers based on backward error minimization properties: GMRES, GMBACK, MINPERT. We obtain results concerning the following topics: monotone properties of the errors in these Newton-Krylov iterates when the initial guess is taken to be 0 in the Krylov algorithms; control of the convergence orders of the Newton-Krylov iterations by the magnitude of the backward errors of the approximate steps; similarities of the asymptotic behavior of GMRES and MINPERT when used in a converging Newton method.
At the end of the paper, the theoretical results are verified on some numerical examples.
Authors
Emil Cătinaş
(Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy)
Keywords
nonlinear system of equations in Rn; inexact Newton method; perturbed Newton method; Krylov methods; linear systems of equation in Rn; backward errors; GMRES; GMBACK; MINPERT; Newton-Krylov methods; residual; local convergence; q-convergence order.
Cite this paper as:
E. Cătinaş, Inexact perturbed Newton methods and applications to a class of Krylov solvers, J. Optim. Theory Appl., 108 (2001) no. 3, pp. 543-570.
About this paper
Publisher Name
Kluwer Academic Publishers-Plenum Publishers
Print ISSN
0022-3239
Online ISSN
1573-2878
Paper (preprint) in HTML form
Inexact Perturbed Newton Methods and Applications to a Class of Krylov Solvers
Abstract
Inexact Newton methods are variants of the Newton method in which the linear system at each step is solved only approximately (Ref. 1). The local convergence theory given by the authors of Ref. 1, and most of the results based on it, consider the error terms as arising only from the fact that the linear systems are not solved exactly. The few existing results for the general case (when perturbed linear systems are considered, which in turn are not solved exactly) do not offer explicit formulas in terms of the perturbations and residuals. We extend this local convergence theory to the general case, characterizing the rate of convergence in terms of the perturbations and residuals.
The Newton iterations are then analyzed when, at each step, an approximate solution of the linear system is determined by one of the following Krylov solvers based on backward error minimization properties: Gmres, Gmback, Minpert. We obtain results concerning the following topics: monotone properties of the errors in these Newton-Krylov iterates when the initial guess is taken to be 0 in the Krylov algorithms; control of the convergence orders of the Newton-Krylov iterations by the magnitude of the backward errors of the approximate steps; similarities of the asymptotic behavior of Gmres and Minpert when used in a converging Newton method. At the end of the paper, the theoretical results are verified on some numerical examples.
Key Words. Nonlinear systems, (inexact) Newton methods, (inexact) perturbed Newton methods, convergence orders, linear systems, backward errors, Krylov methods (Gmres, Gmback, Minpert), NewtonKrylov methods.
Research Fellow, T. Popoviciu Institute of Numerical Analysis, Romanian Academy of Sciences, Cluj-Napoca, Romania.
1. Introduction
Consider the system of nonlinear equations

$$F(x) = 0, \qquad (1)$$

where $F : \mathbb{R}^n \to \mathbb{R}^n$ is a nonlinear mapping, and suppose that:

(C1) There exists $x^* \in \mathbb{R}^n$ such that $F(x^*) = 0$.

A classical approach for approximating $x^*$ is the Newton method, which results in the following algorithm:

(NM) Choose an initial approximation $x_0 \in \mathbb{R}^n$.

For $k = 0, 1, \ldots$, until convergence, do the following steps.

Step 1. Solve $F'(x_k) s_k = -F(x_k)$.

Step 2. Set $x_{k+1} = x_k + s_k$.
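For concreteness, here is a minimal Python sketch of (NM), assuming $F$ and $F'$ are available as callables; the two-dimensional test mapping is a hypothetical illustration, not one of the paper's examples.

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, maxit=50):
    """Minimal sketch of (NM): exact Newton steps for F(x) = 0."""
    x = np.asarray(x0, dtype=float)
    for k in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        s = np.linalg.solve(J(x), -Fx)  # Step 1: solve F'(x_k) s_k = -F(x_k)
        x = x + s                       # Step 2: x_{k+1} = x_k + s_k
    return x

# Hypothetical test mapping: F(x, y) = (x^2 + y^2 - 2, x - y), root (1, 1).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
print(newton(F, J, [2.0, 0.5]))
```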
At each iteration of the Newton method, a routine for solving the resulting linear system must be called. Direct methods for linear systems cannot always deliver the exact solution when used in floating point arithmetic, and they may be inefficient for large general systems. Iterative methods usually cannot produce the exact solution after a finite number of steps, even in exact arithmetic. Another fact is that, when $x_k$ is far from $x^*$, it may be wasteful to solve the system exactly. These reasons call for a convergence analysis which takes some error terms into account.
Several Newton-type methods have been studied. Some sufficient conditions for different convergence orders have been given in Refs. 2-16 and the references therein. A characterization of superlinear convergence and of convergence with $q$-order $1 + p$, $p \in (0, 1]$, has been obtained by Dennis and Moré in Refs. 17-18 for sequences given by the following quasi-Newton method:

$$x_{k+1} = x_k - B_k^{-1} F(x_k), \qquad k = 0, 1, \ldots,$$

$(B_k)_{k \geq 0}$ being a sequence of invertible matrices.
A characterization of local superlinear convergence and of local convergence with $q$-order $1 + p$, $p \in (0, 1]$, of the Newton methods which take into account other error terms has been given by Dembo, Eisenstat, and Steihaug (Ref. 1). They considered in their paper the following inexact Newton method:

(INM) Choose an initial approximation $x_0 \in \mathbb{R}^n$. For $k = 0, 1, \ldots$, until convergence, do the following steps.

Step 1. Find $s_k$ such that $F'(x_k) s_k = -F(x_k) + r_k$.

Step 2. Set $x_{k+1} = x_k + s_k$.

The error terms (residuals) $r_k$ represent the amounts by which the solutions $s_k$, determined in an unspecified manner, fail to satisfy the exact systems from (NM). Their magnitudes are measured by the relative residuals $\|r_k\| / \|F(x_k)\|$, supposed to be bounded by the forcing sequence $(\eta_k)_{k \geq 0}$:

$$\|r_k\| / \|F(x_k)\| \leq \eta_k \leq \eta_{\max} < 1, \qquad k = 0, 1, \ldots. \qquad (2)$$
The local convergence analysis given in Ref. 1 characterizes the convergence orders of the sequences $(x_k)$ given by the (INM) in terms of the magnitudes of $\|r_k\|$; see also Refs. 19-20 for other convergence results. However, in that paper, as well as in most other papers using these results, the error terms are considered to appear only because the exact Newton systems from (NM) are solved approximately. But in many situations, it is hard to find the exact value of $F'(x_k)$, and in some cases even of $F(x_k)$. On the other hand, $F'(x_k)$ (or its approximation) and $F(x_k)$ are both altered when represented in floating point arithmetic.
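Before turning to perturbations, it is instructive to isolate the role of the forcing terms in (2). The following sketch solves each Newton system exactly and then injects an artificial residual $r_k$ of prescribed relative size $\eta_k = \eta$; this is not how practical inexact solvers behave, but it reproduces the (INM) model exactly. The test mapping is the same hypothetical one as above.

```python
import numpy as np

def inexact_newton(F, J, x0, eta=0.5, tol=1e-10, maxit=100):
    """(INM) sketch: F'(x_k) s_k = -F(x_k) + r_k, with ||r_k|| = eta ||F(x_k)||."""
    x = np.asarray(x0, dtype=float)
    rng = np.random.default_rng(0)
    for k in range(maxit):
        Fx = F(x)
        nF = np.linalg.norm(Fx)
        if nF <= tol:
            break
        r = rng.standard_normal(x.size)
        r *= eta * nF / np.linalg.norm(r)   # enforce (2) with eta_k = eta
        s = np.linalg.solve(J(x), -Fx + r)  # the step satisfies the (INM) relation
        x = x + s
    return x, k

F = lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
x, k = inexact_newton(F, J, [2.0, 0.5])
print(x, k)  # converges, but only linearly for a constant forcing term
```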
The question that arises naturally is: what magnitudes can we allow in perturbing the matrices $F'(x_k)$ and the vectors $F(x_k)$ so that the convergence order of the resulting method does not decrease?
The Newton methods with perturbed linear systems have been considered by several authors (see for instance Refs. 9, 11, 21, and references therein), but these methods were not analyzed with respect to their convergence orders.
Martinez, Parada, and Tapia (Ref. 22) have analyzed the superlinear convergence of sequences given by a damped and perturbed quasi-Newton method, in which the quasi-Newton step is computed from a perturbed matrix and a perturbed right-hand side and is then multiplied by a damping factor; but their superlinear convergence was not characterized simultaneously in terms of all these perturbation terms.
In Ref. 23, Ypma studied the case when the matrices and the vectors are calculated approximately, the resulting linear systems being solved inexactly. However, the residuals were considered to be incorporated in those perturbed linear systems. Consequently, the obtained convergence results do not offer explicit formulas in terms of the magnitude of the perturbations and residuals. Recently, Cores and Tapia (Ref. 24) have considered the exact solving of perturbed linear systems, but they have obtained only sufficient conditions on the perturbations for different convergence orders to be attained.
We consider here the perturbations $\Delta_k \in \mathbb{R}^{n \times n}$ and $\delta_k \in \mathbb{R}^n$, being led to the study of the following inexact perturbed Newton method:

(IPNM) Choose an initial approximation $x_0 \in \mathbb{R}^n$. For $k = 0, 1, \ldots$, until convergence, do the following steps.

Step 1. Find $s_k$ such that $(F'(x_k) + \Delta_k) s_k = -(F(x_k) + \delta_k) + \hat{r}_k$.

Step 2. Set $x_{k+1} = x_k + s_k$.

The terms $\hat{r}_k$ denote the residuals of the approximate solutions $s_k$ of the perturbed linear systems

$$(F'(x_k) + \Delta_k) s = -(F(x_k) + \delta_k).$$

When these systems are assumed to be solved exactly [i.e., $\hat{r}_k = 0$, $k = 0, 1, \ldots$, in the (IPNM)], we call the resulting method a perturbed Newton method,

$$(F'(x_k) + \Delta_k) s_k = -(F(x_k) + \delta_k), \qquad x_{k+1} = x_k + s_k, \qquad k = 0, 1, \ldots.$$
This frame will be useful for the study of some Newton-Krylov methods in connection with backward errors.
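Before turning to the convergence results, one can check numerically that a single (IPNM) step is an (INM) step with a combined residual (this identity is stated as (3) in Section 2). The sketch below uses random synthetic data in place of $F'(x_k)$, $F(x_k)$, and the perturbations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
Jx = rng.standard_normal((n, n)) + 5.0 * np.eye(n)  # stand-in for F'(x_k)
Fx = rng.standard_normal(n)                         # stand-in for F(x_k)
Delta = 1e-3 * rng.standard_normal((n, n))          # Jacobian perturbation
delta = 1e-3 * rng.standard_normal(n)               # function perturbation
rhat = 1e-3 * rng.standard_normal(n)                # inner residual

# (IPNM) step: (F'(x_k) + Delta_k) s_k = -(F(x_k) + delta_k) + rhat_k.
s = np.linalg.solve(Jx + Delta, -(Fx + delta) + rhat)

# Equivalent (INM) residual: r_k = rhat_k - delta_k - Delta_k s_k.
r = rhat - delta - Delta @ s
print(np.linalg.norm(Jx @ s - (-Fx + r)))  # ~1e-16: the (INM) relation holds
```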
The paper is structured as follows. In Section 2, we give two convergence results for the (IPNM). In Section 3, we characterize the superlinear convergence and the convergence with $q$-order $1 + p$, $p \in (0, 1]$, of the (IPNM), and we also provide some sufficient conditions. In Section 4, we analyze the Newton methods in which the linear systems from each step are solved by Gmres, Gmback, or Minpert. We verify the obtained theoretical results on numerical examples performed on two test problems (Section 5), and we end the paper with concluding remarks.
Throughout the paper, we consider the Euclidean norm (denoted by $\|\cdot\|_2$) and an arbitrary given norm on $\mathbb{R}^n$ (denoted by $\|\cdot\|$), together with their induced operator norms. We also use the Frobenius norm of a matrix $A = (a_{ij}) \in \mathbb{R}^{n \times n}$, defined as

$$\|A\|_F = \Bigl(\sum_{i,j=1}^n a_{ij}^2\Bigr)^{1/2}.$$

We use the column notation for vectors; when we write vectors (or matrices) inside square brackets, we consider the matrix containing as columns those vectors (or the columns of those matrices). The same convention is used when joining rows. As usual, the vectors $e_1, e_2, \ldots$ form the standard basis, the dimension of the space being clear from the context.
2. Convergence Results for Inexact Perturbed Newton Methods
The common conditions for the local convergence analysis of the Newton method are Condition (C1) and the following additional conditions:
(C2) The mapping $F$ is differentiable on a neighborhood of $x^*$, and $F'$ is continuous at $x^*$.

(C3) The Jacobian $F'(x^*)$ is nonsingular.

These conditions ensure that $x^*$ is a point of attraction for the Newton method; i.e., there exists $\varepsilon > 0$ such that the sequence $(x_k)$ given by the Newton method converges to $x^*$ for any initial approximation $x_0$ with $\|x_0 - x^*\| \leq \varepsilon$.

Moreover, the convergence is $q$-superlinear; see Ref. 2, Theorem 10.2.2 and Ref. 10, Theorem 4.4.
In the convergence analysis of the (IPNM) iterations, we assume that:
(C4) The perturbations $\Delta_k$ are such that the matrices $F'(x_k) + \Delta_k$ are nonsingular for $k = 0, 1, \ldots$.

Though different in notation, this condition is in fact similar to the one considered for the quasi-Newton iterates, which requires the sequence $(B_k)_{k \geq 0}$ to consist of invertible matrices.
The following sufficient condition for the convergence of the (INM) was proved by Dembo, Eisenstat, and Steihaug (Ref. 1).
Theorem 2.1. See Ref. 1. Assume Conditions (C1)-(C3) and $\eta_k \leq \eta_{\max} < t < 1$, $k = 0, 1, \ldots$. There exists $\varepsilon > 0$ such that, if $\|x_0 - x^*\| \leq \varepsilon$, then the sequence $(x_k)$ of the (INM) iterates satisfying (2) converges to $x^*$. Moreover, the convergence is linear, in the sense that

$$\|x_{k+1} - x^*\|_* \leq t \, \|x_k - x^*\|_*, \qquad k = 0, 1, \ldots,$$

where $\|y\|_* = \|F'(x^*) y\|$.
Using this theorem, we obtain the following convergence results for the (IPNM).
Theorem 2.2. Assume Conditions (C1)-(C4) and $\eta_k \leq \eta_{\max} < t < 1$, $k = 0, 1, \ldots$. There exists $\varepsilon > 0$ such that, if $\|x_0 - x^*\| \leq \varepsilon$ and $\|\hat{r}_k - \delta_k - \Delta_k s_k\| \leq \eta_k \|F(x_k)\|$, for $k = 0, 1, \ldots$, then the sequence $(x_k)$ of the (IPNM) iterates converges to $x^*$, the convergence being linear,

$$\|x_{k+1} - x^*\|_* \leq t \, \|x_k - x^*\|_*, \qquad k = 0, 1, \ldots.$$
Proof. The (IPNM) can be viewed as an (INM), since its iterates satisfy $F'(x_k) s_k = -F(x_k) + r_k$. Denoting

$$r_k = \hat{r}_k - \delta_k - \Delta_k s_k, \qquad k = 0, 1, \ldots, \qquad (3)$$

the conclusion follows from Theorem 2.1.
Corollary 2.1. Assume Conditions (C1)-(C4). There exists $\varepsilon > 0$ such that, if $\|x_0 - x^*\| \leq \varepsilon$ and

$$\|\hat{r}_k\| + \|\delta_k\| + \|\Delta_k\| \, \|s_k\| \leq \eta_k \|F(x_k)\|, \qquad \eta_k \leq \eta_{\max} < t < 1, \quad k = 0, 1, \ldots,$$

then the sequence $(x_k)$ of the (IPNM) iterates converges to $x^*$. Moreover, the convergence is linear,

$$\|x_{k+1} - x^*\|_* \leq t \, \|x_k - x^*\|_*, \qquad k = 0, 1, \ldots,$$

where $\|y\|_* = \|F'(x^*) y\|$.
Proof. The proof is obtained easily from the previous result making use of the hypotheses.
Remark 2.1. The idea of reducing certain perturbed Newton methods to (INM) iterations, which is used in the proof of Theorem 2.2, can be found in the work of several authors; see for example Refs. 9-10 and 26. The inexact secant methods considered by us in Ref. 27 are in fact instances of the (IPNM) model, but it was Ref. 28 that inspired us to consider the (IPNM).
3. Convergence Orders of Inexact Perturbed Newton Methods
Stronger conditions imposed on the continuity of $F'$ at $x^*$ yield higher convergence orders for the Newton method. Namely, if $F'$ is Hölder continuous at $x^*$ with exponent $p \in (0, 1]$, i.e., if there exist $L > 0$ and a neighborhood $V$ of $x^*$ such that

$$\|F'(x) - F'(x^*)\| \leq L \, \|x - x^*\|^p, \qquad x \in V,$$

then the Newton method converges locally with $q$-order at least $1 + p$; see Ref. 2, Theorem 10.2.2 and Ref. 10, Theorem 4.4.
For the inexact Newton methods, Dembo, Eisenstat, and Steihaug proved the following result.
Theorem 3.1. See Ref. 1. Assume that Conditions (C1)-(C3) hold and that the inexact Newton iterates $(x_k)$ converge to $x^*$. Then, $x_k \to x^*$ $q$-superlinearly if and only if

$$\|r_k\| = o(\|F(x_k)\|), \qquad \text{as } k \to \infty.$$

Moreover, if $F'$ is Hölder continuous at $x^*$ with exponent $p$, then $x_k \to x^*$ with $q$-order at least $1 + p$ if and only if

$$\|r_k\| = O(\|F(x_k)\|^{1+p}), \qquad \text{as } k \to \infty.$$
We obtain the following result for the inexact perturbed Newton methods.
Theorem 3.2. Assume that Conditions (C1)-(C4) hold and that the iterates $(x_k)$ given by the (IPNM) converge to $x^*$. Then, $x_k \to x^*$ $q$-superlinearly if and only if

$$\|\hat{r}_k - \delta_k - \Delta_k s_k\| = o(\|F(x_k)\|), \qquad \text{as } k \to \infty.$$

Moreover, if $F'$ is Hölder continuous at $x^*$ with exponent $p$, then $x_k \to x^*$ with $q$-order at least $1 + p$ if and only if

$$\|\hat{r}_k - \delta_k - \Delta_k s_k\| = O(\|F(x_k)\|^{1+p}), \qquad \text{as } k \to \infty.$$
Proof. The proof is obtained from the previous theorem by using (3).
In the following result, we characterize the convergence orders of the (IPNM) iterates in terms of the rate of convergence to zero of residuals and perturbations.
Corollary 3.1. Assume that:

(a) Conditions (C1)-(C4) hold;

(b) $\Delta_k \to 0$, as $k \to \infty$;

(c) the sequence $(x_k)$ given by the (IPNM) converges to $x^*$.

In addition, if

$$\|\delta_k\| = o(\|F(x_k)\|) \quad \text{and} \quad \|\hat{r}_k\| = o(\|F(x_k)\|), \qquad \text{as } k \to \infty,$$

then $x_k \to x^*$ $q$-superlinearly.

Under the same assumptions (a)-(c), if additionally $F'$ is Hölder continuous at $x^*$ with exponent $p$, and if

$$\|\Delta_k\| = O(\|F(x_k)\|^p), \quad \|\delta_k\| = O(\|F(x_k)\|^{1+p}), \quad \|\hat{r}_k\| = O(\|F(x_k)\|^{1+p}), \qquad \text{as } k \to \infty,$$

then $x_k \to x^*$ with $q$-order at least $1 + p$.
The corresponding results for the $r$-convergence orders of the (IPNM) iterates may be stated in a similar manner, taking into account the existing results for the (INM); see Ref. 1, Theorem 3.4 and Corollary 3.5.
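A small experiment in the spirit of Corollary 3.1: taking $\|\Delta_k\| = O(\|F(x_k)\|)$ and $\|\delta_k\|, \|\hat{r}_k\| = O(\|F(x_k)\|^2)$ should preserve the quadratic convergence of the Newton method when $F'$ is Lipschitz continuous ($p = 1$). The mapping and the scaling constants are hypothetical choices.

```python
import numpy as np

F = lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
xstar = np.array([1.0, 1.0])

rng = np.random.default_rng(1)
x = np.array([1.5, 0.7])
for k in range(6):
    Fx = F(x)
    nF = np.linalg.norm(Fx)
    Delta = 0.1 * nF * rng.standard_normal((2, 2))  # ||Delta_k|| = O(||F(x_k)||)
    delta = 0.1 * nF**2 * rng.standard_normal(2)    # ||delta_k|| = O(||F(x_k)||^2)
    rhat = 0.1 * nF**2 * rng.standard_normal(2)     # ||rhat_k|| = O(||F(x_k)||^2)
    s = np.linalg.solve(J(x) + Delta, -(Fx + delta) + rhat)
    x = x + s
    print(k, np.linalg.norm(x - xstar))  # errors are roughly squared at each step
```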
4. Applications to Krylov Solvers Based on Backward Error Minimization Properties
Consider the nonsymmetric linear system

$$A x = b, \qquad (4)$$

with $A \in \mathbb{R}^{n \times n}$ nonsingular and $b \in \mathbb{R}^n$. The Krylov methods for solving such a system when the dimension $n$ is large are based on the Krylov subspaces, defined for any initial approximation $x_0 \in \mathbb{R}^n$ as

$$K_m = K_m(A, r_0) = \mathrm{span}\{r_0, A r_0, \ldots, A^{m-1} r_0\}, \qquad m \geq 1,$$

where

$$r_0 = b - A x_0$$

is the initial residual.
The normwise backward error of an approximate solution $\tilde{x}$ of (4) was introduced by Rigal and Gaches (Ref. 29) and is defined by

$$\eta(\tilde{x}) = \min\{\varepsilon > 0 : (A + \Delta A)\tilde{x} = b + \Delta b, \; \|\Delta A\|_2 \leq \varepsilon \alpha, \; \|\Delta b\|_2 \leq \varepsilon \beta\},$$

where the parameters $\alpha \geq 0$ and $\beta \geq 0$ are arbitrary. The value of $\eta(\tilde{x})$ is

$$\eta(\tilde{x}) = \frac{\|b - A \tilde{x}\|_2}{\alpha \|\tilde{x}\|_2 + \beta},$$

and the minimum is attained by the backward errors

$$\Delta A_{\min} = \frac{\alpha}{\alpha \|\tilde{x}\|_2 + \beta} \cdot \frac{\tilde{r} \tilde{x}^T}{\|\tilde{x}\|_2}, \qquad \Delta b_{\min} = -\frac{\beta}{\alpha \|\tilde{x}\|_2 + \beta} \, \tilde{r},$$

with $\tilde{r} = b - A \tilde{x}$.
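The Rigal-Gaches formulas are easy to verify numerically: the sketch below computes $\eta(\tilde{x})$ for a random system and checks that the minimizing perturbations reproduce $\tilde{x}$ as an exact solution of the perturbed system.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
xt = np.linalg.solve(A, b) + 1e-6 * rng.standard_normal(n)  # an approximate solution
alpha, beta = np.linalg.norm(A), np.linalg.norm(b)          # one common choice

r = b - A @ xt
eta = np.linalg.norm(r) / (alpha * np.linalg.norm(xt) + beta)
print("backward error:", eta)

# Minimizing perturbations attaining the bound.
c = alpha * np.linalg.norm(xt) + beta
dA = (alpha / c) * np.outer(r, xt) / np.linalg.norm(xt)
db = -(beta / c) * r
print(np.linalg.norm((A + dA) @ xt - (b + db)))  # ~0: (A + dA) xt = b + db
print(np.linalg.norm(dA, 2) - eta * alpha)       # ~0: ||dA||_2 = eta * alpha
```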
We shall analyze the behavior of the following three Krylov solvers when used for the linear systems arising in the Newton methods: Gmres, Gmback, Minpert. Each of these solvers is based on backward error minimization properties, as we shall see.
In order to use some existing notations, we prefer to denote hereafter by $x^*$ the solution of the nonlinear system (1) and by $x_k$ the iterates of the Newton methods, the letter $x$ being reserved for the unknowns of the involved linear systems.
4.1. Newton-Gmres Method. Given an initial approximation $x_0$, the Gmres method of Saad and Schultz (Ref. 30) uses the Arnoldi process (see Ref. 31) to construct an orthonormal basis of the Krylov subspace $K_m$. The approximation $x_m$ is then determined such that

$$\|b - A x_m\|_2 = \min_{x \in x_0 + K_m} \|b - A x\|_2. \qquad (5)$$

Kasenally has noted in Ref. 32 that the minimizing property of $x_m$ may be expressed in terms of the backward errors; namely, $x_m$ minimizes over $x_0 + K_m$ the backward error obtained by allowing perturbations only in the right-hand side, i.e., assuming $\alpha = 0$ in the Rigal-Gaches formula.
The solution $x_m$ is determined roughly in the following way (see also Ref. 33):

(i) Arnoldi Process. Determine the orthonormal matrix $V_{m+1} = [v_1, \ldots, v_{m+1}]$ and the upper Hessenberg matrix $\bar{H}_m \in \mathbb{R}^{(m+1) \times m}$ such that $A V_m = V_{m+1} \bar{H}_m$.

(ii) Gmres Solution. Find the exact solution $y_m$ of the following least squares problem in $\mathbb{R}^m$:

$$\min_{y \in \mathbb{R}^m} \|\beta e_1 - \bar{H}_m y\|_2, \qquad \beta = \|r_0\|_2,$$

and set $x_m = x_0 + V_m y_m$.
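A compact sketch of steps (i)-(ii), assuming exact arithmetic and handling only the happy breakdown; it returns the Gmres iterate $x_m$ for a fixed subspace dimension $m$.

```python
import numpy as np

def gmres_fixed(A, b, x0, m):
    """Minimal Gmres sketch: Arnoldi process + small least squares problem."""
    n = b.size
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / beta
    for j in range(m):                    # Arnoldi (modified Gram-Schmidt)
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0.0:            # happy breakdown: exact solution reached
            m = j + 1
            H, V = H[:m + 1, :m], V[:, :m + 1]
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(H.shape[0])
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)  # min ||beta e_1 - H_bar y||_2
    return x0 + V[:, :y.size] @ y

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 8)) + 8.0 * np.eye(8)
b = rng.standard_normal(8)
for m in (2, 4, 8):
    xm = gmres_fixed(A, b, np.zeros(8), m)
    print(m, np.linalg.norm(b - A @ xm))  # residual norms decrease with m
```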
The Gmres method is used in an iterative fashion. Saad and Schultz proved that, at each step, the solution $x_m$ is uniquely determined and that the algorithm breaks down in the Arnoldi method only if the exact solution has been reached. Moreover, the process terminates in at most $n$ steps. In the results presented below, we shall assume, as is usual, that the Krylov methods do not continue after the exact solution has been determined.
First, we introduce some notations for the restarted Gmres iterations in order to express the relations which they satisfy.
Since only small values of $m$ are attractive in practice, an upper bound $m_{\max}$ is usually fixed for the subspace dimensions. If after $m_{\max}$ steps the computed solution does not have a sufficiently small residual, the Gmres method is restarted, taking as initial approximation the last computed solution. We index the computed solutions by the restart at which they were obtained (the first $m_{\max}$ solutions, then the solutions from the first restart, and so on); the value of the initial approximation will be clear from the context. For the nonrestarted version, we shall use the common notations $x_m$, $m \geq 1$, while for a generic Gmres solution, we shall simply write $x^G$; the corresponding notation for the $k$th correction from a Newton-Gmres method will be $s_k^G$.
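In practice, the restarted method is usually called through a library routine; e.g., SciPy exposes the restart mechanism directly. A usage sketch follows; the keyword names are those of recent SciPy versions (the tolerance keyword, left at its default here, is rtol in recent versions and tol in older ones).

```python
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(4)
n = 100
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Restarted GMRES(5): subspaces of dimension at most 5, up to 20 restarts.
x, info = gmres(A, b, x0=np.zeros(n), restart=5, maxiter=20)
print(info, np.linalg.norm(b - A @ x))  # info == 0 signals convergence
```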
The choice $x_0 = 0$ in the Krylov solvers is a popular one when no better guess is known; see, e.g., Refs. 26 and 34-35. With the above notation, the following result is obtained immediately from the properties of the Gmres solutions. Certain affirmations from this result are more or less explicitly stated in some papers dealing with Gmres; see for example Refs. 26 and 36-37.
Proposition 4.1. Consider the linear system (4) and the initial approximation $x_0 = 0$. Then, the following statements are true:

(i) For any $m \geq 1$, the residual of the Gmres solution $x_m$ satisfies

$$\|b - A x_m\|_2 \leq \|b\|_2.$$

Moreover, the inequality is strict if and only if the solution $x_m$ is nonzero.

(ii) The residuals associated to the successive solutions $x_1, x_2, \ldots$ satisfy

$$\|b - A x_{m+1}\|_2 \leq \|b - A x_m\|_2, \qquad m \geq 1.$$

The inequality between the norms of two consecutive residuals is strict if and only if the corresponding Gmres solutions are distinct.

(iii) For any fixed upper bound $m_{\max}$ of the subspace dimensions, the residuals of the restarted Gmres method satisfy the same monotonicity property. The inequality between the norms of two consecutive residuals is strict if and only if the corresponding Gmres solutions are distinct; the residuals may eventually reach zero or may stagnate indefinitely, depending on the problem.
Considering the linear systems from the Newton method, we get easily the following proposition.
Proposition 4.2. Let $x \in \mathbb{R}^n$ be an element for which the derivative $F'(x)$ is nonsingular. Applying the Gmres method with $s_0 = 0$ to the linear system $F'(x) s = -F(x)$, then for all $m \geq 1$, the residual satisfies

$$\|F(x) + F'(x) s_m\|_2 \leq \|F(x)\|_2.$$

Moreover, the above inequality is strict if and only if the correction $s_m$ is nonzero.
We have chosen to state the result corresponding only to part (i) of Proposition 4.1. The other statements can be formulated similarly.
Brown (Ref. 26) and Brown and Saad (Ref. 38) considered solving (1) by the global minimization

$$\min_{x \in \mathbb{R}^n} f(x), \qquad f(x) = \tfrac{1}{2} \|F(x)\|_2^2.$$

They obtained that any nonzero correction from a certain step of the Newton-Gmres method with $s_0 = 0$ in Gmres is a descent direction for the above minimization problem. Now, we are able to describe this positive behavior in terms of the distance to the solution.
Theorem 4.1. Assume that the mapping $F$ satisfies Conditions (C1)-(C3), and consider a current approximation $x$ for $x^*$. Let $s \neq 0$ be a nonzero approximate solution of the linear system $F'(x) s = -F(x)$ provided by Gmres, assuming the initial guess $s_0 = 0$. Let $\eta = \|F(x) + F'(x) s\|_2 / \|F(x)\|_2$, denote $\bar{x} = x + s$, and take $t \in (\eta, 1)$. If $x$ is sufficiently close to $x^*$, then

$$\|\bar{x} - x^*\|_* \leq t \, \|x - x^*\|_*,$$

where $\|y\|_* = \|F'(x^*) y\|_2$.
Proof. The assertion can be proved easily with slight modifications of the proof of Theorem 2.1, given in Ref. 1.
The above result says that, when using Gmres (with $s_0 = 0$) in the Newton method, any nonzero correction improves the current outer approximation, provided that the approximation is sufficiently good. Though this theorem guarantees a nonincreasing curve for the errors of the Newton-Gmres iterations starting from a sufficiently good approximation, the local convergence is not guaranteed. It is sufficient to notice that, for a sequence obeying at each step the hypotheses of the above result, there may appear a situation when the set of forcing terms has 1 as an accumulation point, in which case Theorem 2.1 cannot be applied.
The following theoretical example shows that, when the dimensions of the Krylov subspaces are smaller than $n$, the outer iterations may stagnate at the initial approximation $x_0$, no matter how close to $x^*$ we choose $x_0$.
Example 4.1. Consider $F : \mathbb{R}^n \to \mathbb{R}^n$, $F(x) = A x - b$, where $A$ is the cyclic permutation matrix defined by $A e_i = e_{i+1}$, $i = 1, \ldots, n-1$, $A e_n = e_1$, for which

$$F'(x) = A, \qquad x \in \mathbb{R}^n.$$

The Newton method for solving $F(x) = 0$ should yield the unique solution $x^* = A^{-1} b$ in one iteration for any $x_0 \in \mathbb{R}^n$, whenever the corresponding linear system is solved exactly. However, when the correction is determined approximately, the situation changes, and we may use the (INM) setting.

Taking $x_0 = x^* + \varepsilon e_n$, for some arbitrarily small $\varepsilon > 0$, we must solve

$$A s = -F(x_0) = -\varepsilon A e_n,$$

i.e.,

$$A s = -\varepsilon e_1.$$

Applying $m \leq n - 1$ steps of the Arnoldi process with $s_0 = 0$, we obtain $V_m = -[e_1, \ldots, e_m]$ and the upper Hessenberg matrix $\bar{H}_m \in \mathbb{R}^{(m+1) \times m}$ with $h_{i+1,i} = 1$ and all other entries zero.

The Gmres solution is given by (see Refs. 36 and 39)

$$y_m = \arg\min_{y \in \mathbb{R}^m} \|\varepsilon e_1 - \bar{H}_m y\|_2 = 0,$$

such that

$$s_m = V_m y_m = 0,$$

and so

$$x_1 = x_0 + s_m = x_0.$$

We also note that the restarted version of Gmres yields the same result whenever $m \leq n - 1$, regardless of the number of restarts.
This theoretical example shows that the dimension of the Krylov subspaces may sometimes be crucial for the efficiency of the Newton-Gmres method and that the use of the restarted version of Gmres cannot overcome the difficulties. The stagnation of the Gmres algorithm was first reported in Ref. 36 (for a linear system of the above form), but we are not aware of any extension to a nonlinear mapping showing such stagnation of the Newton-Gmres method.
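The stagnation mechanism can be checked directly on the classical example of this kind (a cyclic shift matrix with right-hand side $e_1$, in the spirit of Ref. 36); the sketch below computes the Gmres iterates via their residual-minimizing characterization rather than via the Arnoldi process.

```python
import numpy as np

n = 8
A = np.zeros((n, n))        # cyclic shift: A e_i = e_{i+1}, A e_n = e_1
for i in range(n - 1):
    A[i + 1, i] = 1.0
A[0, n - 1] = 1.0
b = np.zeros(n)
b[0] = 1.0                  # right-hand side e_1

# Gmres with x_0 = 0: minimize ||b - A z||_2 over z in K_m = span{b, Ab, ...}.
for m in range(1, n + 1):
    K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(m)])
    y, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
    z = K @ y
    print(m, np.linalg.norm(z), np.linalg.norm(b - A @ z))
# For m < n the minimizer is z = 0 and the residual norm stays at 1;
# only at m = n does it drop to 0.
```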
4.2. Newton-Gmback Method. The Gmback algorithm for solving the linear system (4) was introduced by Kasenally in Ref. 32. Given $x_0$, it computes a vector $x_m \in x_0 + K_m$ which minimizes the backward error in the matrix $A$, assuming $\Delta b = 0$:

$$\min\{\|\Delta A\|_F : (A + \Delta A) x_m = b, \; x_m \in x_0 + K_m\}.$$
The following steps are performed for determining $x_m$.

(i) Arnoldi Process. Compute the matrices $V_{m+1}$ and $\bar{H}_m$.

(ii) Gmback Solution. Execute Steps (a), (b), (c), (d) below.

(a) Form the matrices of the Gmback generalized eigenproblem, built from $\bar{H}_m$ and $\|r_0\|_2 e_1$ (see Ref. 32).

(b) Determine an eigenvector $u$ corresponding to the smallest eigenvalue $\lambda_{\min}(m)$ of the generalized eigenproblem.

(c) If the first component of $u$ is nonzero, compute the vector $y_m$ by scaling $u$ such that its first component becomes unity, $u = [1 \;\; y_m^T]^T$.

(d) Set $x_m = x_0 + V_m y_m$.
This algorithm may lead to two possible breakdowns, either in the Arnoldi method or in the scaling of $u$. As in the case of Gmres, the first breakdown is a happy breakdown, because the solution may be determined exactly using $V_m$ and $\bar{H}_m$. The second breakdown appears when all the eigenvectors associated to $\lambda_{\min}(m)$ have the first component zero, the inevitable divisions by zero leading to uncircumventible breakdowns. In such a case, either $m$ is increased or the algorithm is restarted with a different initial approximation $x_0$. We shall assume in the following analysis that $x_m$ exists.
Kasenally proved that, for any $m \geq 1$ and $x_0 \in \mathbb{R}^n$, the backward error corresponding to the Gmback solution $x_m$ satisfies

$$\|\Delta A_m\|_F^2 = \lambda_{\min}(m). \qquad (6)$$
The Newton-Gmback iterates may be written in two equivalent ways, taking into account the properties of the Krylov solutions:

$$x_{k+1} = x_k + s_k, \qquad (F'(x_k) + \Delta_k) s_k = -F(x_k), \qquad k = 0, 1, \ldots,$$

where we considered $\Delta_k$ the minimal backward perturbation produced by Gmback at the outer step $k$; equivalently, they are (INM) iterates with residuals $r_k = -\Delta_k s_k$.

There are three results which may be applied to characterize the high convergence orders of these sequences (the corresponding statements are left to the reader): Theorem 2.1 of Dembo, Eisenstat, and Steihaug; the results of Dennis and Moré for the quasi-Newton methods (see Refs. 17-18); and Theorem 2.2 for the (IPNM) iterates, in which we must take $\delta_k = 0$ and $\hat{r}_k = 0$. Naturally, these results must be equivalent, but here we do not analyze this aspect.
Concerning the sufficient conditions, the convergence orders of the Newton-Gmback method may be controlled by the eigenvalues $\lambda_{\min}(m)$ computed within Gmback.
Theorem 4.2. Consider the sequence of Newton-Gmback iterates $x_{k+1} = x_k + s_k$, where $s_k$ satisfies

$$(F'(x_k) + \Delta_k) s_k = -F(x_k), \qquad \|\Delta_k\|_F^2 = \lambda_{\min}^k(m).$$

Assume the following:

(a) Conditions (C1)-(C4) hold.

(b) The derivative $F'$ is Hölder continuous with exponent $p$ at $x^*$.

(c) The sequence $(x_k)$ converges to $x^*$.

Moreover, if

$$\lambda_{\min}^k(m) = O(\|F(x_k)\|^{2p}), \qquad \text{as } k \to \infty,$$

then the Newton-Gmback iterates converge with $q$-order at least $1 + p$.
Proof. The proof follows from Corollary 3.1, taking into account formula (6), the inequality $\|\Delta_k\|_2 \leq \|\Delta_k\|_F$, true for all $k$, and the fact that all norms are equivalent on a finite-dimensional normed space.
We are now interested whether the curve of the Newton-Gmback errors is nonincreasing when starting with a sufficiently good approximation. In the following, we construct an example which shows that, unlike the Newton-Gmres case, this property is not generally shared by the Newton-Gmback method when Gmback is used in the nonrestarted version.
Example 4.2. Consider a system $F(x) = 0$, with $F : \mathbb{R}^n \to \mathbb{R}^n$ having a unique solution $x^*$ and an explicitly known derivative $F'(x)$ at every $x \in \mathbb{R}^n$. Taking $x_0 = x^* + \varepsilon v$, for a suitably chosen direction $v$ and arbitrarily small $\varepsilon > 0$, we must solve the Newton system $F'(x_0) s = -F(x_0)$. Applying $m$ steps of the Arnoldi process with $s_0 = 0$, we obtain successively the matrices $V_{m+1}$ and $\bar{H}_m$. The eigenpair $(\lambda_{\min}, u)$ of the Gmback generalized eigenproblem is determined uniquely, and the Gmback solution $s_m$ follows by the corresponding scaling, such that, when $\varepsilon \to 0$, the error of the new iterate $x_1 = x_0 + s_m$ exceeds that of $x_0$.
4.3. Newton-Minpert Method. The Minpert method for solving (4) was introduced by Kasenally and Simoncini in Ref. 40. Given an initial approximation $x_0$, it computes an element $x_m \in x_0 + K_m$ which minimizes the joint backward error:

$$\min\{\|[\Delta A \;\; \Delta b]\|_F : (A + \Delta A) x_m = b + \Delta b, \; x_m \in x_0 + K_m\}. \qquad (7)$$

In other words, $x_m$ minimizes the distance from the original system to the nearest one that an approximation from the affine subspace actually satisfies. The above minimization problem may also be viewed as a total least squares problem in the Krylov subspace (see Ref. 40).
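Writing $(A + \Delta A)\tilde{x} = b + \Delta b$ as $[\Delta A \;\, \Delta b] \, [\tilde{x}^T \;\, {-1}]^T = -(b - A\tilde{x})$ shows that the joint backward error of any fixed approximation $\tilde{x}$ has the closed form $\|b - A\tilde{x}\|_2 / (1 + \|\tilde{x}\|_2^2)^{1/2}$, attained by a rank-one correction of $[A \;\, b]$; this is consistent with relations (8)-(9) below. A numerical check on random data:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
xt = np.linalg.solve(A, b) + 1e-4 * rng.standard_normal(n)  # an approximate solution

r = b - A @ xt
z = np.concatenate([xt, [-1.0]])   # (A + dA) xt = b + db  <=>  [dA db] z = r
E = np.outer(r, z) / (z @ z)       # minimal Frobenius-norm solution of [dA db] z = r
dA, db = E[:, :n], E[:, n]

print(np.linalg.norm((A + dA) @ xt - (b + db)))  # ~0: xt solves the perturbed system
print(np.linalg.norm(E, 'fro'),
      np.linalg.norm(r) / np.sqrt(1.0 + xt @ xt))  # the two values coincide
```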
The algorithm is similar to Gmback, the only difference being given by the computation of the matrices from the eigenproblem, which now account for perturbations in both $A$ and $b$ (see Ref. 40).

The following is a sketch of the steps performed by Minpert.

(i) Arnoldi Process. Compute the matrices $V_{m+1}$ and $\bar{H}_m$.

(ii) Minpert Solution.

(a) Form the matrices of the Minpert generalized eigenproblem (see Ref. 40).

(b) Determine an eigenvector $u$ corresponding to the smallest eigenvalue $\mu_{\min}(m)$ of the generalized eigenproblem.

(c) If the first component of $u$ is nonzero, compute the vector $y_m$ by scaling $u$ such that its first component becomes unity, $u = [1 \;\; y_m^T]^T$.

(d) Set $x_m = x_0 + V_m y_m$.
The remarks concerning the breakdowns of the Gmback method hold also for Minpert.
Kasenally and Simoncini proved that, for any $m \geq 1$ and $x_0 \in \mathbb{R}^n$, the following relations hold:

$$\|[\Delta A_m \;\; \Delta b_m]\|_F^2 = \mu_{\min}(m), \qquad (8)$$

$$\|b - A x_m\|_2^2 = \mu_{\min}(m) \, (1 + \|x_m\|_2^2), \qquad (9)$$

where $\Delta A_m$, $\Delta b_m$ represent the backward errors and $b - A x_m$ the residual corresponding to the Minpert solution $x_m$.
We shall prove for Minpert a result somewhat similar to Proposition 4.1.
Proposition 4.3. Consider the linear system (4), the initial approximation $x_0 = 0$, and an arbitrary value $m \geq 1$. If there exists a Minpert solution $x_m$, then its joint backward error satisfies

$$\|[\Delta A_m \;\; \Delta b_m]\|_F \leq \|b\|_2.$$

Proof. When $x_0 = 0$, as noticed in Ref. 40, we are led to a regular eigenproblem in the Minpert algorithm. The boundedness of the corresponding Rayleigh quotient then implies

$$\mu_{\min}(m) \leq \|b\|_2^2,$$

so, by (8), the conclusion is immediate.
The other two affirmations similar to those from Proposition 4.1 may be stated correspondingly, with the single remark that, since the solution of the minimization problem (7) may not be uniquely determined, the inequality between two consecutive joint backward errors may not be strict even if the consecutive solutions $x_m$ and $x_{m+1}$ are distinct.
We also note that, when the initial approximation in Gmback is taken $x_0 = 0$, one cannot similarly use the corresponding vector in the Fischer minimax result (see, e.g., Ref. 41, Corollary VI.1.16) in order to bound $\lambda_{\min}(m)$ by $\|b\|_2^2$, since in this case that vector is an eigenvector corresponding to the infinite eigenvalue of the Gmback eigenproblem. Such a result could not be expected to hold in general, because by Corollary 3.1 it would imply that, for an arbitrary mapping $F$ satisfying Conditions (C1)-(C3) and with Lipschitz continuous derivative, the Newton-Gmback iterations with $s_0 = 0$ in Gmback and obeying Condition (C4) converge locally with $q$-order 2 even for one-dimensional Krylov subspaces.
The convergence orders of the Newton-Minpert iterates may be characterized by (9) in terms of the computed eigenvalues $\mu_{\min}(m)$.
Theorem 4.3. Assume that Conditions (C1)-(C3) hold, that the derivative $F'$ is Hölder continuous with exponent $p$ at $x^*$, and that the sequence given by the Newton-Minpert iterations $x_{k+1} = x_k + s_k$, with

$$\|F(x_k) + F'(x_k) s_k\|_2^2 = \mu_{\min}^k(m) \, (1 + \|s_k\|_2^2),$$

converges to $x^*$. Then, $x_k \to x^*$ with $q$-order at least $1 + p$ if and only if

$$\mu_{\min}^k(m) = O(\|F(x_k)\|^{2(1+p)}), \qquad \text{as } k \to \infty.$$
The direct application of Proposition 4.3 to the study of the monotonicity of the Newton-Minpert errors leads to a result similar to Theorem 4.1, but which requires some additional conditions.
Theorem 4.4. Assume that the mapping $F$ satisfies Conditions (C1)-(C3), and consider a current approximation $x$ for $x^*$. Let $s \neq 0$ be a nonzero approximate solution of the linear system $F'(x) s = -F(x)$ provided by Minpert with the initial guess $s_0 = 0$. Let

$$\eta = \|F(x) + F'(x) s\|_2 / \|F(x)\|_2.$$

If

$$\eta < 1, \qquad t \in (\eta, 1),$$

and if $x$ is sufficiently close to $x^*$, then

$$\|x + s - x^*\|_* \leq t \, \|x - x^*\|_*,$$

where $\|y\|_* = \|F'(x^*) y\|_2$.
The following example shows that the Newton-Minpert iterates may stagnate no matter how close to the solution we choose $x_0$.
Example 4.3. Consider the mapping $F$ from Example 4.1, which leads us again to solving the linear system $A s = -\varepsilon e_1$. For $m \leq n - 1$, the Minpert method with $s_0 = 0$ leads to a diagonal eigenproblem whose smallest eigenvalue, for any $\varepsilon < 1$, forces the unique choice $y_m = 0$; it follows that $s_m = 0$, and so $x_1 = x_0$.
Remark 4.1. This example also shows that, for the linear systems above, the Minpert method behaves identically with Gmres; i.e., for $s_0 = 0$ and $m \leq n - 1$, it cannot improve the accuracy of the approximate solution, even in the restarted version. The systems with the same matrix $A$, but with $\varepsilon > 1$, constitute examples where the Minpert method applied to a linear system leads to uncircumventible breakdowns. The stagnations and the uncircumventible breakdowns of Minpert were theoretically known to be possible, but we had not encountered concrete examples before.
We end the theoretical considerations by applying a result which relates the Gmres and Minpert approximations to the Newton iterations. However, we do not tackle here this problem in its general setting nor do we analyze the conditions required in this result.
Kasenally and Simoncini proved that the difference between the Gmres and Minpert solutions may be bounded under certain circumstances.
Theorem 4.5. See Ref. 40. Consider the same arbitrary elements $x_0$ and $m$ in the Gmres and Minpert methods for solving the linear system (4), and denote by $x_m^G$ and $x_m^M$ the corresponding solutions. Denote by $\sigma_m^2$ the smallest eigenvalue of $\bar{H}_m^T \bar{H}_m$, i.e., the smallest squared singular value of $\bar{H}_m$, and assume that $\mu_{\min}(m) < \sigma_m^2$. Then,

$$\|x_m^M - x_m^G\|_2 \leq \frac{\mu_{\min}(m)}{\sigma_m^2 - \mu_{\min}(m)} \, \|x_m^G - x_0\|_2.$$

An auxiliary result proved by these authors shows that the eigenvalues determined at the step $m$ interlace the squared singular values of $\bar{H}_m$; in particular, $\mu_{\min}(m) \leq \sigma_m^2$.

The bound from the above theorem is large whenever $\mu_{\min}(m)$ is close to $\sigma_m^2$, and the result cannot be applied when $\mu_{\min}(m) = \sigma_m^2$; such a situation arises, for example, when $\sigma_m^2$ is a repeated eigenvalue.
Theorem 4.5 allows us to determine theoretical conditions under which Gmres and Minpert have the same asymptotic behavior when used in a converging Newton method.
Theorem 4.6. Assume that Conditions (C1)-(C3) hold and that the sequence $(x_k)$ given by the Newton-Gmres method converges to $x^*$, where the initial approximations and subspace dimensions in the Gmres algorithm are arbitrary at each outer step. Denote by $s_k^G$ the corrections obtained from the linear systems

$$F'(x_k) s = -F(x_k), \qquad k = 0, 1, \ldots,$$

and consider the approximate solutions $s_k^M$ of these linear systems obtained with Minpert, which is assumed to use the same initial approximations, the same subspace dimensions, and the same number of restarts as Gmres. If $\sigma_m^2 - \mu_{\min}(m) \geq \gamma > 0$ at all the inner steps, for $k = 0, 1, \ldots$, and $\mu_{\min}(m) \to 0$, as $k \to \infty$, then Gmres and Minpert yield asymptotically the same normalized corrections:

$$\frac{s_k^G}{\|s_k^G\|_2} - \frac{s_k^M}{\|s_k^M\|_2} \to 0, \qquad \text{as } k \to \infty.$$

The same result holds inverting the roles of Gmres and Minpert.
Proof. The hypotheses of the theorem imply that

$$\|s_k^G - s_k^M\|_2 = o(\|s_k^G\|_2), \qquad \text{as } k \to \infty.$$

For the nonrestarted version of the Krylov solvers, the affirmation is a straightforward application of Theorem 4.5, while for the restarted version, the conclusion is obtained by an inductive argument on the number of restarts from each outer step.
5. Numerical Examples
We shall consider two test problems from Refs. 34 and 42, which provide nonlinear systems in $\mathbb{R}^n$, and we apply to them the Newton-Krylov methods studied above. The Krylov solvers are considered in the nonrestarted version, and the initial guess is taken 0 in both the inner and outer iterations. We are interested in the behavior of the magnitude of the backward errors of these methods and in verifying the theoretical results; therefore, we do not aim to implement the most efficient methods for solving the problems under consideration.
5.1. Bratu Problem. Consider a nonlinear partial differential equation of generalized Bratu type over the unit square of $\mathbb{R}^2$, with Dirichlet boundary conditions. As mentioned in Refs. 34 and 42, this is a standard problem, a simplified form of which, $\Delta u + \lambda e^u = 0$, is known as the Bratu problem (see Ref. 43). We have discretized it by means of 5-point finite differences (respectively, by means of central finite differences for the lower-order terms) on a uniform mesh, obtaining a system of nonlinear equations whose size is given by the number of interior mesh points. As in Ref. 34, we took the right-hand side such that the solution of the discretized problem is the constant unity.
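As an illustration of the discretization step, here is a sketch of the residual function for the simplified Bratu form $\Delta u + \lambda e^u = f$ only (the actual test problem of Refs. 34 and 42 contains additional terms), with $f$ chosen so that $u \equiv 1$ solves the discretized problem.

```python
import numpy as np

def bratu_residual(u, lam, h):
    """5-point FD residual of  Delta u + lam * exp(u) - f = 0  on a uniform mesh.

    u holds the interior unknowns of an nx-by-nx grid; the Dirichlet boundary
    value is 1, and f = lam * e makes u = 1 the exact discrete solution.
    """
    nx = u.shape[0]
    U = np.ones((nx + 2, nx + 2))  # pad the grid with the boundary values
    U[1:-1, 1:-1] = u
    lap = (U[:-2, 1:-1] + U[2:, 1:-1] + U[1:-1, :-2] + U[1:-1, 2:]
           - 4.0 * U[1:-1, 1:-1]) / h**2
    return lap + lam * np.exp(u) - lam * np.e

nx, lam = 16, 1.0
h = 1.0 / (nx + 1)
u = np.ones((nx, nx))
print(np.abs(bratu_residual(u, lam, h)).max())  # ~0: u = 1 solves the system
```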
The runs were made on a PC, using Matlab Version 4.0. The symbol $\|\cdot\|$ denotes either the Euclidean norm or the Frobenius norm, as appropriate, and entries of the form 3e+1 stand for numbers of magnitude $3 \cdot 10^1$.
Table 1 contains the results obtained by using the Newton-Gmres method. We have also computed the corrections given by Minpert at each step, in order to record the norm of the difference of the normalized Gmres and Minpert corrections; these Minpert corrections were computed only for the comparison. It can be seen that the normalized corrections agree the closer the iterations get to the solution.
Table 2 contains the results obtained by using the Newton-Gmback method. We notice the rather constant magnitudes of the Gmback backward errors, which do not approach zero even when the iterates are close to the solution.
In Table 3, we have denoted the quantities appearing in the bound of Theorem 4.5. The bounds seem to be tight here during the first iterations. As soon as the iterations approach the solution, the inequalities from Theorem 4.5 cease to hold numerically.
[Table 1: Newton-Gmres iteration history for the Bratu problem, $k = 0, \ldots, 21$.]
[Table 2: Newton-Gmback iteration history for the Bratu problem, $k = 0, \ldots, 21$.]
[Table 3: the quantities of Theorem 4.5 along the Newton iterations for the Bratu problem, $k = 0, \ldots, 20$.]
We believe that this is due to the different types of errors which have appeared in computing the eigenpairs, the Krylov approximations, and the remaining elements involved.
5.2. Driven Cavity Flow Problem. This is a classical problem from incompressible fluid flow. It has the following equations in the stream function-vorticity formulation:

$$\nu \Delta \omega + \psi_y \omega_x - \psi_x \omega_y = 0, \qquad (10)$$

$$\Delta \psi + \omega = 0, \qquad (11)$$

where the domain is again the interior of the unit square from $\mathbb{R}^2$ and the viscosity $\nu$ is the reciprocal of the Reynolds number $Re$. In terms of $\psi$ alone, (10) and (11) are replaced by

$$\nu \Delta^2 \psi + \psi_y (\Delta \psi)_x - \psi_x (\Delta \psi)_y = 0.$$

This equation was discretized again on a uniform mesh, choosing the boundary conditions to be incorporated in the nonlinear system. We have used the symbolic facilities offered by Maple V Release 3.0 in order to determine the discretized equations. The Reynolds number was taken small, and the subspace dimensions of the Krylov methods were taken large, since the nonrestarted versions of these methods were not efficient for this problem.
The tables below contain the results of the three Newton-Krylov methods applied to this problem. We notice again that, close to the solution, the computed elements cease to satisfy numerically the inequalities of Theorem 4.5.
[Table 4: Newton-Krylov iteration history for the driven cavity flow problem, $k = 0, \ldots, 135$.]
[Table 5: further results for the driven cavity flow problem, $k = 0, \ldots, 135$.]
In Fig. 1, we have plotted on a semilog scale the evolution of the errors for the three methods. The first 150 steps were considered. Notice the monotone behavior of Newton-Gmres and Newton-Minpert starting from certain steps, and also the nonmonotone behavior of Newton-Gmback.
6. Conclusions
The local superlinear convergence and the local convergence with $q$-order $1 + p$, $p \in (0, 1]$, of the Newton methods were characterized in a more natural setting, which assumes that the Jacobians are perturbed, the function evaluations are performed approximately, and the resulting linear systems are solved inexactly. The results of Dembo, Eisenstat, and Steihaug allowed us to extend their local convergence analysis to this setting.
The sufficient conditions for local convergence with different convergence orders show that the perturbations in the Jacobians may be allowed to have much greater magnitudes than those in the function evaluations and than the magnitudes of the residuals; this may be explained by the fact that the right-hand sides of the linear systems from the Newton method tend to zero, while the matrices tend to $F'(x^*)$, assumed here to be nonsingular. It follows that the methods for solving the linear systems from the Newton methods must be analyzed in this respect, and also that special care must be taken when the function evaluations may be affected by significant errors. The greater sensitivity of the right-hand sides has been known for a long time (see for example Refs. 11, 21, and references therein), such that the evaluation of these vectors in double precision or extended precision is already standard.
The existing results for the (INM) iterates allowed us to show that the Newton-Gmres iterates have a monotone property. We were able to prove that the Newton-Minpert iterates share this property only under some additional conditions, whereas a concrete example showed that the Newton-Gmback iterates do not generally share it. The convergence orders of the Newton-Gmback and Newton-Minpert methods can be controlled in terms of the magnitude of the backward errors of the approximate steps. The theoretical results were confirmed by the numerical examples performed.
Many of the existing results for different Newton-type methods may be reconsidered in the (IPNM) setting. For example, the local convergence orders of some finite-difference Newton-Krylov methods studied by Brown (Ref. 26) may now be analyzed in the (IPNM) setting. On the other hand, new approaches can be developed as well; for instance, we mention the local convergence and acceleration techniques for the (IPNM) in the case of a singular Jacobian at the solution. Other results and directions of research are included in our PhD Thesis (Ref. 44).
References
1. Dembo, R. S., Eisenstat, S. C., and Steihaug, T., Inexact Newton Methods, SIAM Journal on Numerical Analysis, Vol. 19, pp. 400-408, 1982.
2. Ortega, J. M., and Rheinboldt, W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, NY, 1970.
3. Kantorovich, L. V., and Akilov, G. P., Functional Analysis, Editura Ştiinţifică şi Enciclopedică, Bucharest, Romania, 1986 (in Romanian).
4. Păvăloiu, I., Introduction to the Theory of Approximating the Solutions of Equations, Editura Dacia, Cluj-Napoca, Romania, 1976 (in Romanian).
5. Măruşter, Şt., Numerical Methods for Solving Nonlinear Equations, Editura Tehnică, Bucharest, Romania, 1981 (in Romanian).
6. Dennis, J. E., Jr., and Schnabel, R. B., Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Series in Computational Mathematics, Prentice-Hall, Englewood Cliffs, New Jersey, 1983.
7. Potra, F. A., and Pták, V., Nondiscrete Induction and Iterative Processes, Pitman, London, England, 1984.
8. Argyros, I., and Szidarovszky, F., The Theory and Applications of Iteration Methods, CRC Press, Boca Raton, Florida, 1993.
9. Kelley, C. T., Iterative Methods for Linear and Nonlinear Equations, SIAM, Philadelphia, Pennsylvania, 1995.
10. Rheinboldt, W. C., Methods for Solving Systems of Nonlinear Equations, SIAM, Philadelphia, Pennsylvania, 1998.
11. Dennis, J. E., Jr., and Walker, H. F., Inaccuracy in Quasi-Newton Methods: Local Improvement Theorems, Mathematical Programming Study, Vol. 22, pp. 70-85, 1984.
12. Potra, F. A., and Pták, V., Sharp Error Bounds for Newton's Process, Numerische Mathematik, Vol. 34, pp. 63-72, 1980.
13. Deuflhard, P., and Potra, F. A., Asymptotic Mesh Independence of Newton-Galerkin Methods via a Refined Mysovskii Theorem, SIAM Journal on Numerical Analysis, Vol. 29, pp. 1395-1412, 1992.
14. Argyros, I., Concerning the Convergence of Inexact Newton Methods, Journal of Computational and Applied Mathematics, Vol. 79, pp. 235-247, 1997.
15. Argyros, I., On a New Newton-Mysovskii-Type Theorem with Applications to Inexact Newton-Like Methods and Their Discretization, IMA Journal of Numerical Analysis, Vol. 18, pp. 37-56, 1998.
16. Sherman, A. H., On Newton-Iterative Methods for the Solution of Systems of Nonlinear Equations, SIAM Journal on Numerical Analysis, Vol. 15, pp. 755-771, 1978.
17. Dennis, J. E., Jr., and Moré, J. J., A Characterization of Superlinear Convergence and Its Application to Quasi-Newton Methods, Mathematics of Computation, Vol. 28, pp. 549-560, 1974.
18. Dennis, J. E., Jr., and Moré, J. J., Quasi-Newton Methods, Motivation and Theory, SIAM Review, Vol. 19, pp. 46-89, 1977.
19. Eisenstat, S. C., and Walker, H. F., Choosing the Forcing Terms in an Inexact Newton Method, SIAM Journal on Statistical and Scientific Computing, Vol. 17, pp. 16-32, 1996.
20. Eisenstat, S. C., and Walker, H. F., Globally Convergent Inexact Newton Methods, SIAM Journal on Optimization, Vol. 4, pp. 393-422, 1994.
21. Ypma, T. J., The Effect of Rounding Errors on Newton-Like Methods, IMA Journal of Numerical Analysis, Vol. 3, pp. 109-118, 1983.
22. Martinez, H. J., Parada, Z., and Tapia, R. A., On the Characterization of Q-Superlinear Convergence of Quasi-Newton Interior-Point Methods for Nonlinear Programming, Boletín de la Sociedad Matemática Mexicana, Vol. 1, pp. 137-148, 1995.
23. Ypma, T. J., Local Convergence of Inexact Newton Methods, SIAM Journal on Numerical Analysis, Vol. 21, pp. 583-590, 1983.
24. Cores, D., and Tapia, R. A., Perturbation Lemma for the Newton Method with Application to the SQP Newton Method, Journal of Optimization Theory and Applications, Vol. 97, pp. 271-280, 1998.
25. Potra, F. A., On Q-Order and R-Order of Convergence, Journal of Optimization Theory and Applications, Vol. 63, pp. 415-431, 1989.
26. Brown, P. N., A Local Convergence Theory for Combined Inexact-Newton/Finite-Difference Projection Methods, SIAM Journal on Numerical Analysis, Vol. 24, pp. 407-434, 1987.
27. Cătinaş, E., A Note on Inexact Secant Methods, Revue d'Analyse Numérique et de Théorie de l'Approximation, Vol. 25, pp. 33-41, 1996.
28. Królikowska, M., Nonlinear Mappings with an Almost Sparse Jacobian Matrix, Annales Universitatis Mariae Curie-Skłodowska, Lublin, Poland, Vol. 49A, pp. 145-157, 1995.
29. Rigal, J. L., and Gaches, J., On the Compatibility of a Given Solution with the Data of a Linear System, Journal of the Association for Computing Machinery, Vol. 14, pp. 543-548, 1967.
30. Saad, Y., and Schultz, M. H., GMRES: A Generalized Minimal Residual Algorithm for Solving Nonsymmetric Linear Systems, SIAM Journal on Scientific and Statistical Computing, Vol. 7, pp. 856-869, 1986.
31. Saad, Y., Krylov Subspace Methods for Solving Large Unsymmetric Linear Systems, Mathematics of Computation, Vol. 37, pp. 105-126, 1981.
32. Kasenally, E. M., GMBACK: A Generalized Minimum Backward Error Algorithm for Nonsymmetric Linear Systems, SIAM Journal on Statistical and Scientific Computing, Vol. 16, pp. 698-719, 1995.
33. Saad, Y., Iterative Methods for Sparse Linear Systems, PWS Publishing Company, Boston, Massachusetts, 1996.
34. Brown, P. N., and Saad, Y., Hybrid Krylov Methods for Nonlinear Systems of Equations, SIAM Journal on Scientific and Statistical Computing, Vol. 11, pp. 450-481, 1990.
35. Turner, K., and Walker, H. F., Efficient High Accuracy Solutions with GMRES(m), SIAM Journal on Scientific and Statistical Computing, Vol. 13, pp. 815-825, 1992.
36. Brown, P. N., and Hindmarsh, A. C., Reduced Storage Matrix Methods in Stiff ODE Systems, Applied Mathematics and Computation, Vol. 31, pp. 40-91, 1989.
37. Greenbaum, A., Pták, V., and Strakoš, Z., Any Nonincreasing Convergence Curve is Possible for GMRES, SIAM Journal on Matrix Analysis and Applications, Vol. 17, pp. 465-469, 1996.
38. Brown, P. N., and Saad, Y., Convergence Theory of Nonlinear Newton-Krylov Algorithms, SIAM Journal on Optimization, Vol. 4, pp. 297-330, 1994.
39. Brown, P. N., A Theoretical Comparison of the Arnoldi and GMRES Algorithms, SIAM Journal on Scientific and Statistical Computing, Vol. 12, pp. 58-78, 1991.
40. Kasenally, E. M., and Simoncini, V., Analysis of a Minimum Perturbation Algorithm for Nonsymmetric Linear Systems, SIAM Journal on Numerical Analysis, Vol. 34, pp. 48-66, 1997.
41. Stewart, G. W., and Sun, J. G., Matrix Perturbation Theory, Academic Press, New York, NY, 1990.
42. Moré, J. J., A Collection of Nonlinear Model Problems, Computational Solutions of Nonlinear Systems of Equations, Lectures in Applied Mathematics, Edited by E. L. Allgower and K. Georg, American Mathematical Society, Providence, Rhode Island, Vol. 26, pp. 723-762, 1990.
43. Glowinski, R., Keller, H. B., and Reinhart, L., Continuation-Conjugate Gradient Methods for the Least-Squares Solution of Nonlinear Boundary-Value Problems, SIAM Journal on Scientific and Statistical Computing, Vol. 6, pp. 793-832, 1985.
44. Cătinaş, E., Newton and Newton-Krylov Methods for Solving Nonlinear Systems in $\mathbb{R}^n$, PhD Thesis, Babeş-Bolyai University of Cluj-Napoca, Cluj-Napoca, Romania, 1999.