Abstract
A classical model of Newton iterations which takes into account some error terms is given by the quasi-Newton method, which assumes perturbed Jacobians at each step. Its high q-convergence orders were characterized by Dennis and Moré [Math. Comp. 28 (1974), 549-560].
The inexact Newton method constitutes another such model, since it assumes that at each step the linear systems are only approximately solved; the high q-convergence orders of these iterations were characterized by Dembo, Eisenstat and Steihaug [SIAM J. Numer. Anal. 19 (1982), 400-408].
We have recently considered the inexact perturbed Newton method [J. Optim. Theory Appl. 108 (2001), 543-570] which assumes that at each step the linear systems are perturbed and then they are only approximately solved; we have characterized the high q-convergence orders of these iterates in terms of the perturbations and residuals.
In the present paper we show that these three models are in fact equivalent, in the sense that each one may be used to characterize the high q-convergence orders of the other two. We also study the relationship in the case of linear convergence and we deduce a new convergence result.
Authors
Emil Cătinaş
(Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy)
Keywords
nonlinear system of equations in Rn; inexact Newton method; perturbed Newton method; quasi-Newton methods; linear systems of equations in Rn; backward errors; GMRES; GMBACK; MINPERT; Newton-Krylov methods; residual; local convergence; convergence order.
Cite this paper as:
E. Cătinaş, The inexact, inexact perturbed and quasi-Newton methods are equivalent models, Math. Comp., 74 (2005) no. 249, pp. 291-301
doi: 10.1090/S0025-5718-04-01646-1
see also here: https://ictp.acad.ro/catinas/papers/2005%20Catinas%20-%20Math.%20Comp.%20-%20The%20inexact,%20inexact%20perturbed%20and%20quasi-Newton%20methods%20are%20equivalent%20models.pdf
About this paper
Journal
Mathematics of Computation
Publisher Name
American Mathematical Society
Print ISSN
0025-5718
Online ISSN
1088-6842
MR
?
ZBL
?
Paper (preprint) in HTML form
THE INEXACT, INEXACT PERTURBED, AND QUASI-NEWTON METHODS ARE EQUIVALENT MODELS
Abstract
A classical model of Newton iterations which takes into account some error terms is given by the quasi-Newton method, which assumes perturbed Jacobians at each step. Its high convergence orders were characterized by Dennis and Moré [Math. Comp. 28 (1974), 549-560]. The inexact Newton method constitutes another such model, since it assumes that at each step the linear systems are only approximately solved; the high convergence orders of these iterations were characterized by Dembo, Eisenstat and Steihaug [SIAM J. Numer. Anal. 19 (1982), 400-408]. We have recently considered the inexact perturbed Newton method [J. Optim. Theory Appl. 108 (2001), 543-570] which assumes that at each step the linear systems are perturbed and then they are only approximately solved; we have characterized the high convergence orders of these iterates in terms of the perturbations and residuals.
In the present paper we show that these three models are in fact equivalent, in the sense that each one may be used to characterize the high convergence orders of the other two. We also study the relationship in the case of linear convergence and we deduce a new convergence result.
1. Introduction
Consider a nonlinear system $F(x)=0$, where $F:D\subseteq\mathbb{R}^n\to\mathbb{R}^n$. The local convergence of the Newton iterates
$$x_{k+1}=x_k-F'(x_k)^{-1}F(x_k),\qquad k=0,1,\ldots,$$
to a solution $x^\ast$ is usually studied under the following conditions, which will be implicitly assumed throughout this paper:
•	the mapping $F$ is Fréchet differentiable on $D$, with $F'$ continuous at $x^\ast$;
•	the Jacobian $F'(x^\ast)$ is invertible.
Given an arbitrary norm $\|\cdot\|$ on $\mathbb{R}^n$, these hypotheses assure the existence of a radius $r>0$ such that the Newton iterates converge superlinearly to $x^\ast$ for any initial approximation $x_0$ with $\|x_0-x^\ast\|\le r$ [27, Th. 10.2.2] (see also [33, Th. 4.4]).
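To fix ideas, here is a minimal numerical sketch (not part of the paper) of the Newton iteration above on a small test system; the mapping $F$ and all names are illustrative assumptions.

```python
import numpy as np

def F(x):
    # illustrative test system in R^2 (not from the paper): a circle and a line
    return np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])

def J(x):
    # its Jacobian F'(x)
    return np.array([[2.0*x[0], 2.0*x[1]], [1.0, -1.0]])

x = np.array([2.0, 0.5])                 # initial approximation x_0
for k in range(8):
    s = np.linalg.solve(J(x), -F(x))     # Newton step: F'(x_k) s_k = -F(x_k)
    x = x + s
    print(k, np.linalg.norm(F(x)))       # ||F(x_{k+1})|| decreases very fast (quadratically here)
```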
2000 Mathematics Subject Classification. Primary 65H10.
Key words and phrases. Inexact, inexact perturbed and quasi-Newton methods, convergence orders. This research has been supported by the Romanian Academy under grant GAR 97/1999, and by the National Agency for Science, Technology and Innovation under grant GANSTI 6100 GR/2000.
Recall that an arbitrary sequence $(x_k)_{k\ge0}\subset\mathbb{R}^n$ is said to converge $q$-superlinearly (superlinearly, for short) to its limit $x^\ast$ if
$$\lim_{k\to\infty}\frac{\|x_{k+1}-x^\ast\|}{\|x_k-x^\ast\|}=0,\tag{1.1}$$
also denoted by $\|x_{k+1}-x^\ast\|=o(\|x_k-x^\ast\|)$, as $k\to\infty$. For rigorous definitions and results concerning the high convergence orders, we refer the reader to [27, ch. 9] and [31] (see also [33, ch. 3] and [32]).
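As a simple worked illustration of (1.1) (not taken from the paper), the scalar sequence $x_k=2^{-k^2}$ converges $q$-superlinearly to $x^\ast=0$, since
$$\frac{|x_{k+1}-x^\ast|}{|x_k-x^\ast|}=\frac{2^{-(k+1)^2}}{2^{-k^2}}=2^{-(2k+1)}\longrightarrow 0,\quad\text{as }k\to\infty.$$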
However, in many situations, different elements of the Newton iterations are only approximately determined. The first such case considers approximate Jacobians at each step, and leads to the quasi-Newton (QN) iterates
$$x_{k+1}=x_k-B_k^{-1}F(x_k),\qquad k=0,1,\ldots.$$
There exist a number of studies dealing with the approximation of $F'(x_k)$ by matrices $B_k$ using various techniques (see for instance [27], [15], [23], [33] and the references therein). The superlinear convergence of these sequences was characterized by Dennis and Moré. We state here a slightly weaker form of this result.
Theorem 1 ([13]). Consider a sequence of invertible matrices $(B_k)_{k\ge0}$ and an initial approximation $x_0$. If the QN iterates converge to $x^\ast$, then they converge superlinearly if and only if
$$\|(B_k-F'(x^\ast))(x_{k+1}-x_k)\|=o(\|x_{k+1}-x_k\|),\quad\text{as }k\to\infty.\tag{1.2}$$
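A small numerical illustration of condition (1.2) (my own sketch, not from the paper): Broyden's rank-one update is one standard quasi-Newton choice for $B_k$, and along a convergent run the Dennis-Moré ratio in (1.2) can be monitored directly; the test mapping and the starting point are illustrative assumptions.

```python
import numpy as np

def F(x):
    return np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])

def J(x):
    return np.array([[2.0*x[0], 2.0*x[1]], [1.0, -1.0]])

x_star = np.array([1.0, 1.0]) / np.sqrt(2.0)   # exact solution, used only for monitoring
x = np.array([0.9, 0.6])
B = J(x)                                       # B_0 = F'(x_0)
for k in range(10):
    s = np.linalg.solve(B, -F(x))              # QN step: B_k s_k = -F(x_k)
    x_new = x + s
    # Dennis-More ratio ||(B_k - F'(x*)) s_k|| / ||s_k|| from (1.2)
    ratio = np.linalg.norm((B - J(x_star)) @ s) / np.linalg.norm(s)
    print(k, ratio, np.linalg.norm(x_new - x_star))
    y = F(x_new) - F(x)
    B = B + np.outer(y - B @ s, s) / (s @ s)   # Broyden rank-one update of B_k
    x = x_new
    if np.linalg.norm(s) < 1e-14:              # stop once at machine precision
        break
```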
Another practical model of Newton iterates assumes that the linear systems at each step are not solved exactly:
$$F'(x_k)(x_{k+1}-x_k)=-F(x_k)+r_k,\qquad k=0,1,\ldots.$$
The terms $r_k$ represent the residuals of the approximate solutions $x_{k+1}$ of the linear systems $F'(x_k)(x-x_k)=-F(x_k)$. Dembo, Eisenstat and Steihaug characterized the superlinear convergence of the inexact Newton (IN) method above.
Theorem 2 ([12]). Assume that the IN iterates converge to $x^\ast$. Then the convergence is superlinear if and only if
$$\|r_k\|=o(\|F(x_k)\|),\quad\text{as }k\to\infty.\tag{1.3}$$
They also obtained the following local convergence result.
Theorem 3 ([12]). Given $0\le\bar\eta<t<1$, there exists $\varepsilon>0$ such that for any initial approximation $x_0$ with $\|x_0-x^\ast\|\le\varepsilon$, the sequence of the IN iterates satisfying
$$\|r_k\|\le\bar\eta\,\|F(x_k)\|,\qquad k=0,1,\ldots,\tag{1.4}$$
converges to $x^\ast$. Moreover, the convergence is linear in the sense that
$$\|x_{k+1}-x^\ast\|_\ast\le t\,\|x_k-x^\ast\|_\ast,\qquad k=0,1,\ldots,\tag{1.5}$$
where $\|y\|_\ast=\|F'(x^\ast)y\|$.
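A schematic illustration of the residual control (1.4) (my own sketch, not the paper's): the inexact inner solve is emulated here by adding a random residual of size $\bar\eta\|F(x_k)\|$ to the exact Newton system, so that by Theorem 3 local linear convergence is still obtained (for $x_0$ close enough); the test mapping and the value of $\bar\eta$ are illustrative assumptions.

```python
import numpy as np

def F(x):
    return np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])

def J(x):
    return np.array([[2.0*x[0], 2.0*x[1]], [1.0, -1.0]])

rng = np.random.default_rng(0)
eta_bar = 0.2                                   # forcing term, 0 <= eta_bar < 1
x = np.array([0.9, 0.6])
for k in range(25):
    Fx, Jx = F(x), J(x)
    r = rng.standard_normal(2)
    r *= eta_bar * np.linalg.norm(Fx) / np.linalg.norm(r)   # ||r_k|| = eta_bar * ||F(x_k)||
    s = np.linalg.solve(Jx, -Fx + r)            # inexact step: F'(x_k) s_k = -F(x_k) + r_k
    x = x + s
    print(k, np.linalg.norm(F(x)))              # linear decrease, rate governed by eta_bar
```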
We have recently considered in [8] the inexact perturbed Newton (IPN) method
$$(F'(x_k)+\Delta_k)(x_{k+1}-x_k)=-(F(x_k)+\delta_k)+\hat r_k,\qquad k=0,1,\ldots,$$
where $\Delta_k\in\mathbb{R}^{n\times n}$ represent perturbations of the Jacobians, $\delta_k\in\mathbb{R}^n$ perturbations of the function evaluations, while $\hat r_k\in\mathbb{R}^n$ are the residuals of the approximate solutions $x_{k+1}$ of the perturbed linear systems $(F'(x_k)+\Delta_k)(x-x_k)=-(F(x_k)+\delta_k)$.
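The following small numerical check (an illustration of mine, not from the paper) verifies the algebra behind this model: an IPN step is the same as an IN step whose residual collects all three error sources; the random data merely stand in for $F'(x_k)$, $F(x_k)$ and the perturbations.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
Jx = rng.standard_normal((n, n)) + 3.0*np.eye(n)   # stands in for F'(x_k)
Fx = rng.standard_normal(n)                        # stands in for F(x_k)
Delta = 1e-2 * rng.standard_normal((n, n))         # Jacobian perturbation Delta_k
delta = 1e-2 * rng.standard_normal(n)              # function perturbation delta_k
r_hat = 1e-2 * rng.standard_normal(n)              # residual of the perturbed solve

# IPN step: (F'(x_k) + Delta_k)(x_{k+1} - x_k) = -(F(x_k) + delta_k) + r_hat_k
s = np.linalg.solve(Jx + Delta, -(Fx + delta) + r_hat)

# the same step viewed as an IN step: F'(x_k)(x_{k+1} - x_k) = -F(x_k) + r_k
r = Jx @ s + Fx
print(np.allclose(r, r_hat - delta - Delta @ s))   # True: r_k = r_hat_k - delta_k - Delta_k (x_{k+1}-x_k)
```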
We have obtained the following results.
Theorem 4 ([8]). Assume that the IPN iterates are uniquely defined (i.e., the perturbations $\Delta_k$ are such that the matrices $F'(x_k)+\Delta_k$ are invertible for $k=0,1,\ldots$) and converge to $x^\ast$. Then the convergence is superlinear if and only if
$$\|\hat r_k-\delta_k-\Delta_k(x_{k+1}-x_k)\|=o(\|F(x_k)\|),\quad\text{as }k\to\infty.$$
Theorem 5 ([8]). Given $0\le\bar\eta<t<1$, there exists $\varepsilon>0$ such that if $\|x_0-x^\ast\|\le\varepsilon$ and the IPN iterates are uniquely defined and satisfy suitable bounds on the perturbations $\Delta_k$, $\delta_k$ and on the residuals $\hat r_k$ (the precise bounds are given in [8]), then these iterates converge to $x^\ast$ at the linear rate
$$\|x_{k+1}-x^\ast\|_\ast\le t\,\|x_k-x^\ast\|_\ast,\qquad k=0,1,\ldots.$$
The same conclusion holds if the above bounds are replaced by an alternative set of bounds given in [8].
Remark 1. It is not difficult to prove that, in fact, the above theorem also holds when the bounds on $\delta_k$ and $\hat r_k$ are measured relative to $\|F(x_k)\|$ (instead of the quantities used in [8]).
The aim of this paper is to perform an analysis of the three methods mentioned above, in order to reveal the natural connection between them. This will allow us to obtain sharp conditions ensuring the local convergence of the inexact perturbed and quasi-Newton methods.
We shall show first that each of the inexact, inexact perturbed, and quasi-Newton methods may be used to characterize the high convergence orders of the other two. In this sense, we remark (see [8]) that the proofs of Theorems 4 and 5 were obtained by rewriting the IPN iterations as IN iterations having the residuals
$$r_k=\hat r_k-\delta_k-\Delta_k(x_{k+1}-x_k),$$
and then applying Theorems 2 and 3, respectively. We also note that the IN model is a particular instance of the IPN model. These facts show the equivalence of these two models regarding their linear and superlinear convergence; the same connection in fact holds for the convergence orders $1+p$, $0<p\le1$, under supplementary Hölder continuity conditions on $F'$ at $x^\ast$.
It remains therefore to analyze the connection between the inexact and the quasi-Newton iterations. This will be done in the following section, while in Section 3 we shall give a new local linear convergence result and relate it to some existing ones.
2. Superlinear convergence of inexact and quasi-Newton methods
We begin this section by presenting some auxiliary results.
Walker has shown that the convergence of an arbitrary sequence from $\mathbb{R}^n$ is tightly connected to the convergence of its corrections.
Lemma 1 ([35]). Consider an arbitrary sequence $(x_k)_{k\ge0}\subset\mathbb{R}^n$ converging to some element $x^\ast$. Then the convergence is superlinear if and only if the corrections $x_{k+1}-x_k$ converge superlinearly to zero. In case of superlinear convergence it follows that
$$\lim_{k\to\infty}\frac{\|x_{k+1}-x_k\|}{\|x_k-x^\ast\|}=1.$$
The last assertion of this lemma has been known for a long time (see [13]).
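For the reader's convenience, the last claim follows from the triangle inequality (a standard one-line argument, supplied here): since $x_{k+1}-x_k=(x^\ast-x_k)+(x_{k+1}-x^\ast)$,
$$\left|\frac{\|x_{k+1}-x_k\|}{\|x_k-x^\ast\|}-1\right|\le\frac{\|x_{k+1}-x^\ast\|}{\|x_k-x^\ast\|}\longrightarrow 0,\quad\text{as }k\to\infty.$$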
The following result was given by Dembo, Eisenstat and Steihaug.
Lemma 2 ([12]). Let $\alpha>0$ and $\|y\|_\ast=\|F'(x^\ast)y\|$, $y\in\mathbb{R}^n$. Then there exists $\varepsilon>0$ such that
$$\frac{1}{1+\alpha}\,\|x-x^\ast\|_\ast\le\|F(x)\|\le(1+\alpha)\,\|x-x^\ast\|_\ast,\qquad\text{for all }x\text{ with }\|x-x^\ast\|\le\varepsilon.$$
Before stating our results, denote $s_k=x_{k+1}-x_k$; the quasi-Newton iterates are transcribed as
$$B_ks_k=-F(x_k),\qquad x_{k+1}=x_k+s_k,\qquad k=0,1,\ldots,$$
and condition (1.2) characterizing their superlinear convergence becomes
$$\|(B_k-F'(x^\ast))s_k\|=o(\|s_k\|),\quad\text{as }k\to\infty.\tag{2.1}$$
Now we are able to present the results relating the superlinear convergence of the IN and QN methods. First, we shall regard the QN iterates as IN iterates:
$$F'(x_k)s_k=-F(x_k)+r_k,\qquad\text{with }r_k=(F'(x_k)-B_k)s_k;$$
condition (1.3) characterizing their superlinear convergence becomes
$$\|(F'(x_k)-B_k)s_k\|=o(\|F(x_k)\|),\quad\text{as }k\to\infty.\tag{2.2}$$
The first step is accomplished by the following result.
Theorem 6. Conditions (2.1) and (2.2) are equivalent.
Proof. Some obvious reasons show that (2.1) holds iff
$$\|(B_k-F'(x_k))s_k\|=o(\|s_k\|),\quad\text{as }k\to\infty.$$
Lemmas 1 and 2 show that the sequences $(\|F(x_k)\|)$, $(\|s_k\|)$ and $(\|x_k-x^\ast\|)$ converge superlinearly to zero only at the same time, which ends the proof.
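Spelled out, the "obvious reasons" amount to the triangle inequality together with the continuity of $F'$ at $x^\ast$ (a standard estimate, added here for completeness):
$$\big|\,\|(B_k-F'(x^\ast))s_k\|-\|(B_k-F'(x_k))s_k\|\,\big|\le\|F'(x_k)-F'(x^\ast)\|\,\|s_k\|=o(\|s_k\|),\quad\text{as }k\to\infty,$$
since $x_k\to x^\ast$; hence each of the two quantities is $o(\|s_k\|)$ exactly when the other one is.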
Remark 2. As noticed in [14], the condition $\|B_k-F'(x_k)\|\to 0$, as $k\to\infty$, is sufficient but not also necessary for (2.2) to hold.
Formulas (2.1) and (2.2) do not explicitly require the invertibility of the perturbed Jacobians $B_k$ at each step. Consequently, one may restate Theorem 1 by demanding only that the corresponding iterates be well defined, i.e., that the linear systems $B_ks_k=-F(x_k)$ be compatible. In this sense, Theorem 1 (as well as, in fact, Theorem 2) can be retrieved from the following extension of Theorem 4.
Theorem 7. Assume that the IPN iterates are well defined and converge to $x^\ast$. Then the convergence is superlinear if and only if
$$\|\hat r_k-\delta_k-\Delta_ks_k\|=o(\|F(x_k)\|),\quad\text{as }k\to\infty.\tag{2.3}$$
Proof. One may use Theorem 2 in a straightforward manner by writing the IPN iterates as IN iterates with residuals $r_k=\hat r_k-\delta_k-\Delta_ks_k$.
The utility of this result becomes apparent, for example, when analyzing the local convergence of the two Newton-Krylov methods described below.
Example 1. a) Given a linear system $Ax=b$, with $A\in\mathbb{R}^{n\times n}$ nonsingular and $b\in\mathbb{R}^n$, an arbitrary initial approximation $x_0$ to the solution of this linear system, and denoting $r_0=b-Ax_0$ and $\mathcal{K}_m=\operatorname{span}\{r_0,Ar_0,\ldots,A^{m-1}r_0\}$, the GMBACK solver [21] determines an approximation $x_m\in x_0+\mathcal{K}_m$ by the minimization problem
$$\min_{x_m\in x_0+\mathcal{K}_m}\|\Delta_m\|_F\qquad\text{subject to}\qquad(A+\Delta_m)x_m=b.$$
As with all the Krylov solvers, the method is advantageous when good approximations are obtained for small subspace dimensions $m$, the dimension $n$ being supposed large. Depending on the parameters, the problem may have no solution at all, a unique solution, or several solutions. In the first case the algorithm may be continued either by increasing the subspace dimension $m$ or by restarting with a different $x_0$. In the second case the matrix $A+\Delta_m$ is invertible, while in the third case this matrix is not invertible, but the linear system $(A+\Delta_m)x=b$ is still compatible.
The superlinear convergence of the Newton-GMBACK iterates, written as
$$(F'(x_k)+\Delta_k)s_k=-F(x_k),\qquad x_{k+1}=x_k+s_k,\qquad k=0,1,\ldots,$$
may therefore be characterized by Theorem 7, taking $\delta_k=0$ and $\hat r_k=0$, since if the iterates converge we do not mind whether they are uniquely defined.
Apart from the theoretical interest, the use of the QN model for these iterations is worth considering from the computational standpoint as well, when the residuals are expensive to evaluate. Indeed, according to Remark 2, the condition $\|\Delta_k\|\to 0$ as $k\to\infty$ is sufficient for the converging Newton-GMBACK iterates to attain superlinear rate (see also [8]). This provides an alternative in controlling the convergence rate of the above method, since the magnitude of $\|\Delta_k\|$ may be estimated by computing the smallest eigenvalue of a low-dimensional generalized eigenproblem (of size given by the Krylov subspace dimension), during the same process of determining the step $s_k$.
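To illustrate the backward error perturbation that GMBACK is built on (a sketch of mine, not the GMBACK algorithm itself, which minimizes over Krylov subspaces): for a fixed approximate solution $\hat x$ of $Ax=b$, the classical rank-one perturbation below makes $\hat x$ an exact solution of the perturbed system and has minimal norm; the data are randomly generated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n)) + 4.0*np.eye(n)
b = rng.standard_normal(n)
x_hat = np.linalg.solve(A, b) + 1e-3*rng.standard_normal(n)   # some approximate solution

r = b - A @ x_hat
Delta = np.outer(r, x_hat) / (x_hat @ x_hat)    # minimal-norm backward error (rank one)
print(np.allclose((A + Delta) @ x_hat, b))      # True: x_hat solves the perturbed system exactly
print(np.isclose(np.linalg.norm(Delta, 2),      # and ||Delta||_2 = ||r|| / ||x_hat||
                 np.linalg.norm(r) / np.linalg.norm(x_hat)))
```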
b) The other Krylov solver we mention is the MINPERT method [22], which minimizes the joint backward error in the matrix and the right-hand side:
$$\min_{x_m\in x_0+\mathcal{K}_m}\big\|[\Delta_m\;\;\delta_m]\big\|_F\qquad\text{subject to}\qquad(A+\Delta_m)x_m=b+\delta_m.$$
Theorem 7 is again the natural choice for characterizing the superlinear rate of the Newton-MINPERT iterations, when framed in the perturbed Newton method
$$(F'(x_k)+\Delta_k)s_k=-(F(x_k)+\delta_k),\qquad x_{k+1}=x_k+s_k,\qquad k=0,1,\ldots$$
(i.e., taking $\hat r_k=0$), with the remark that the convergence of these iterates too may be characterized by eigenvalues computed in the inner steps (see [8]).
Returning to the analysis of the IN and QN methods, it remains to write the IN iterates as QN iterates. We get
$$B_ks_k=-F(x_k),\qquad\text{with }B_k=F'(x_k)-\frac{r_ks_k^T}{s_k^Ts_k},$$
the condition (2.1) characterizing their superlinear convergence being transcribed as
$$\|(F'(x_k)-F'(x^\ast))s_k-r_k\|=o(\|s_k\|),\quad\text{as }k\to\infty.\tag{2.4}$$
The equivalence of the QN and IN models is completed by the following result, which again has a straightforward proof.
Theorem 8. Conditions (1.3) and (2.4) are equivalent.
Remark 3. a) In case of superlinear convergence of the IN iterates, the invertibility of the matrices $B_k$ above is automatically satisfied from a certain step onward. Indeed, since
$$\Big\|\frac{r_ks_k^T}{s_k^Ts_k}\Big\|_2=\frac{\|r_k\|_2}{\|s_k\|_2}$$
(see [20, P.2.3.9]), some standard arguments show that the assumptions on the mapping $F$ assure the stated property.
b) Condition (1.3) appeared in a natural way in characterizing the convergence orders of the IN iterates; it is especially suitable, for example, in the case of the standard Newton-GMRES method, when the norms of the residuals may be cheaply computed at each inner step, without the cost of forming the actual corrections [34]. However, in some situations this condition may require unnecessarily small residuals (oversolving), as reported in several papers (see, e.g., [18]).
According to Lemmas 1 and 2, the sequences $(\|F(x_k)\|)$, $(\|s_k\|)$ and $(\|x_k-x^\ast\|)$ converge superlinearly to zero only at the same time, and therefore one may devise some combinations of them to use instead of (1.3). We mention the following condition, which characterizes the quadratic convergence of the IN iterations:
$$\|r_k\|=O\big(\|F(x_k)\|\cdot\|s_k\|\big),\quad\text{as }k\to\infty.$$
It emerged naturally from backward error analysis [7], and it clearly shows that oversolving does not appear when the corrections are sufficiently large. We intend to analyze the control of the convergence orders in a future work.
3. Local linear convergence of the IPN method
Morini [26] and Gasparo and Morini [19] have obtained some local linear convergence results for the iterates
$$B_ks_k=-F(x_k)+r_k,\qquad x_{k+1}=x_k+s_k,\qquad k=0,1,\ldots.\tag{3.1}$$
We shall relate them to Theorem 5, but in its special instances for the QN and the IN sequences.
We notice first that, similarly to Theorem 7, one may easily prove the following result.
Theorem 9. Given $0\le\bar\eta<t<1$, there exists $\varepsilon>0$ such that for any initial approximation $x_0$ with $\|x_0-x^\ast\|\le\varepsilon$, if the IPN iterates are well defined and satisfy
$$\|\hat r_k-\delta_k-\Delta_ks_k\|\le\bar\eta\,\|F(x_k)\|,\qquad k=0,1,\ldots,\tag{3.2}$$
then they converge to $x^\ast$ and obey
$$\|x_{k+1}-x^\ast\|_\ast\le t\,\|x_k-x^\ast\|_\ast,\qquad k=0,1,\ldots.$$
Since condition (1.4) is known to be sharp for ensuring the local convergence of the IN iterates, the same property follows for (3.2) concerning the IPN method.
We may also obtain the sharp condition regarding the QN model by taking $\delta_k=0$ and $\hat r_k=0$ in (3.2), namely $\|\Delta_ks_k\|\le\bar\eta\,\|F(x_k)\|$, $k=0,1,\ldots$. When the QN iterates are uniquely defined, Theorem 5 yields another sufficient condition for convergence (by Remark 1, taking the bounds on $\delta_k$ and $\hat r_k$ equal to zero), namely a bound (3.3) on the norms $\|\Delta_k\|$ alone, which is not always sharp since it is deduced using an estimate of the form $\|\Delta_ks_k\|\le\|\Delta_k\|\,\|s_k\|$.
There exist few local linear convergence results for the QN method in the literature, despite the frequent use of this model. A first result was obtained by Ortega and Rheinboldt [27, p. 311], who considered a matrix-valued mapping $G:D\to\mathbb{R}^{n\times n}$ and the iterates
$$x_{k+1}=x_k-G(x_k)F(x_k),\qquad k=0,1,\ldots.$$
The local linear convergence of the above sequence follows from the Ostrowski attraction fixed point theorem, under the strong basic assumption that $G$ is continuous at $x^\ast$ and, moreover,
$$\rho\big(I-G(x^\ast)F'(x^\ast)\big)<1,$$
where $\rho(A)=\max\{|\lambda|:\lambda\ \text{eigenvalue of }A\}$ denotes the spectral radius of $A$. In our notation ($B_k=G(x_k)^{-1}$, with $B_k=B(x_k)$ for a matrix-valued mapping $B$), the above condition becomes
$$\rho\big(I-B(x^\ast)^{-1}F'(x^\ast)\big)<1,$$
and is implied, for example, when $\|I-B(x^\ast)^{-1}F'(x^\ast)\|<1$.
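To see where the spectral radius condition comes from (a standard computation, supplied here under the simplifying assumption that $B(\cdot)$ is differentiable near $x^\ast$): the iteration is a fixed point iteration $x_{k+1}=\Phi(x_k)$ with $\Phi(x)=x-B(x)^{-1}F(x)$, and since $F(x^\ast)=0$ the term containing the derivative of $B(\cdot)^{-1}$ vanishes at $x^\ast$, so that
$$\Phi(x^\ast)=x^\ast,\qquad \Phi'(x^\ast)=I-B(x^\ast)^{-1}F'(x^\ast);$$
the Ostrowski theorem then gives local convergence, at a linear rate governed by $\rho(\Phi'(x^\ast))$, whenever $\rho(I-B(x^\ast)^{-1}F'(x^\ast))<1$.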
Other results we are aware of can be retrieved from those in [26] and [19], as we shall see in the following. In these papers the residuals were controlled by conditions of the form $\|P_kr_k\|\le\eta_k\,\|P_kF(x_k)\|$. The invertible matrices $P_k$ arise in the context of preconditioning strategies for solving the linear systems. We shall consider here the case $P_k=I$, in order to be able to relate these results to the previous ones. We recall that the condition number of an invertible matrix $A$ in the norm $\|\cdot\|$ is given
by $\operatorname{cond}(A)=\|A\|\cdot\|A^{-1}\|$. The mapping $F$ was assumed to belong to a certain class, i.e., to obey the following additional assumptions:
•	the set $D$ (on which $F$ is defined) is open;
•	the derivative $F'$ is continuous on $D$;
•	the solution $x^\ast$ is unique in a ball $\bar B(x^\ast,r)=\{x:\|x-x^\ast\|\le r\}$, and $\bar B(x^\ast,r)\subseteq D$;
•	for all $x,y\in D$ one has
$$\|F'(x)-F'(y)\|\le\gamma\,\|x-y\|.$$
These hypotheses implied the existence of $\varepsilon>0$ such that $F'(x)$ is invertible for all $x$ with $\|x-x^\ast\|\le\varepsilon$, and
$$\|F'(x)^{-1}\|\le\beta,$$
where $\beta>0$ is a suitable constant.
The following result was obtained.
Theorem 10 ([26]). Let the approximations $B_k$ of the Jacobians $F'(x_k)$ be invertible and satisfy, for all $k=0,1,\ldots$, bounds (3.4) on the deviations $\|B_k-F'(x_k)\|$ and on the residuals $\|r_k\|$ relative to $\|F(x_k)\|$. If these bounds and the condition numbers $\operatorname{cond}(B_k)$ satisfy a certain smallness relation (involving a parameter $t<1$), then the sequence given by (3.1) and obeying $\|x_0-x^\ast\|\le\varepsilon$ is uniquely defined and converges to $x^\ast$, with a corresponding linear rate.
For the case of the quasi-Newton iterates we take $r_k=0$ (i.e., zero forcing terms) in the above theorem, being led to relation (3.5), the corresponding bound for the linear rate of the QN iterates, while conditions (3.3) imply the convergence rate (1.5), which can be estimated in the norm $\|\cdot\|$ by
$$\|x_{k+1}-x^\ast\|\le t\operatorname{cond}\big(F'(x^\ast)\big)\,\|x_k-x^\ast\|,\qquad k=0,1,\ldots.\tag{3.6}$$
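The estimate (3.6) follows from (1.5) and the definition $\|y\|_\ast=\|F'(x^\ast)y\|$ by a standard chain of inequalities (included here for completeness):
$$\|x_{k+1}-x^\ast\|\le\|F'(x^\ast)^{-1}\|\,\|x_{k+1}-x^\ast\|_\ast\le t\,\|F'(x^\ast)^{-1}\|\,\|x_k-x^\ast\|_\ast\le t\operatorname{cond}\big(F'(x^\ast)\big)\,\|x_k-x^\ast\|.$$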
Though assumptions (3.4) and (3.3) seem to be somehow similar, the estimated upper bound (3.6) appears to be larger than the one in (3.5).
For the IN iterates we take $B_k=F'(x_k)$ and $P_k=I$ in Theorem 10 and, denoting $\kappa_k=\operatorname{cond}(F'(x_k))$, one gets an upper bound for the $q$-factor defined in (1.1) expressed in terms of the forcing terms $\eta_k$ and of $\kappa_k$. Theorem 3 yields (3.6), and since $t\in(\bar\eta,1)$ is arbitrary, we arrive at a similar bound, $\bar\eta\operatorname{cond}(F'(x^\ast))$. However, the assumptions in Theorem 10 are obviously stronger than those of Theorem 3.
The results in [19] show somewhat similar bounds for the convergence rates, and again an explicit inverse proportionality between the condition numbers of the (preconditioned) matrices and the
forcing terms $\eta_k$, but under weaker smoothness assumptions on $F$ (more exactly, requiring only the continuity of $F'$, and not also the Lipschitz-type condition involving the constant $\gamma$).
The following aspects are known to occur in practical applications, when the condition numbers are large. First, linear convergence in the norm $\|\cdot\|_\ast$ does not necessarily imply linear convergence (or linear convergence with a sufficiently good rate) in the norm $\|\cdot\|$, required in certain problems. Second, the excessively small residuals required to ensure good convergence properties affect the overall efficiency of the method (by additional inner iterations in solving the linear systems).
The results in [26] and [19] show that the use of preconditioners which reduce the condition numbers allows larger forcing terms. Another important feature is that the condition number involved is not that of the Jacobian at the solution (which is not known), but that of the Jacobian (or of the preconditioned perturbed Jacobian) at the current approximation. The available estimators of the condition numbers give these results their practical utility.
Conclusions
We have proved that the inexact, the inexact perturbed and the quasi-Newton methods are related in a natural way: the conditions characterizing their high convergence orders remain invariant when reconsidering the source(s) of the error terms. This approach allowed us to obtain a new convergence result, but it also shows that any property specific to one model of perturbed Newton method may now be transcribed to the other models. For example, the affine invariant conditions for the Newton and the inexact Newton methods (see [16], [17], [37] and [26]) may be considered for the inexact perturbed and the quasi-Newton methods.
Another example of transposing a class of iterations into a different frame can be found in [9], where the successive approximations for smooth iteration mappings were regarded as IN sequences, and the Ostrowski attraction fixed point theorem was refined by characterizing the fast convergent trajectories. This approach is full of potential for further developments; we mention, for instance, obtaining estimates for the radius of the attraction balls.
Acknowledgment. The author would like to thank an anonymous referee for careful reading of the manuscript and for useful suggestions.
References
[1] I. Argyros, Advances in the Efficiency of Computational Methods and Applications, World Scientific Publishing Co., Inc., River Edge, NJ, 2000. MR 2002k:65003
[2] I. Argyros, Concerning the convergence of inexact Newton methods, J. Comp. Appl. Math. 79 (1997), 235-247. MR 98c:47077
[3] R. E. Bank and D. J. Rose, Global approximate Newton methods, Numer. Math. 37 (1981), 279-295. MR 83c:65127
[4] __, Parameter selection for Newton-like methods applicable to nonlinear partial differential equations, SIAM J. Numer. Anal. 17 (1980), 806-822. MR 81m:65071
[5] P. N. Brown, A local convergence theory for combined inexact-Newton/finite-difference projection methods, SIAM J. Numer. Anal. 24 (1987), 407-434. MR 88d:65088
[6] E. Cătinaş, Newton and Newton-Krylov Methods for Solving Nonlinear Systems in $\mathbb{R}^n$, Ph.D. thesis, "Babeş-Bolyai" University Cluj-Napoca, Romania, 1999.
[7] _, A note on the quadratic convergence of the inexact Newton methods, Rev. Anal. Numér. Théor. Approx. 29 (2000), 129-134. MR 2003i:65049
[8] __, Inexact perturbed Newton methods and applications to a class of Krylov solvers, J. Optim. Theory Appl. 108 (2001), 543-570. MR 2002c:65074
[9] __, On the superlinear convergence of the successive approximations method, J. Optim. Theory Appl. 113 (2002), 473-485. MR 2003e:65086
[10] __, Inexact perturbed Newton methods for nonlinear systems with singular Jacobians, submitted.
[11] _, Accurate finite differences in the Newton-Krylov methods, manuscript.
[12] R. S. Dembo, S. C. Eisenstat and T. Steihaug, Inexact Newton methods, SIAM J. Numer. Anal. 19 (1982), 400-408. MR 83b:65056
[13] J. E. Dennis, Jr. and J. J. Moré, A characterization of superlinear convergence and its application to quasi-Newton methods, Math. Comp. 28 (1974), 549-560. MR 49:8322
[14] _, Quasi-Newton methods, motivation and theory, SIAM Rev. 19 (1977), 46-89. MR 56:4146
[15] J. E. Dennis, Jr. and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall Series in Computational Mathematics, Englewood Cliffs, NJ, 1983. MR 85j:65001
[16] P. Deuflhard and G. Heindl, Affine invariant convergence theorems for Newton’s method and extensions to related methods, SIAM J. Numer. Anal. 16 (1979), 1-10. MR 80i:65068
[17] P. Deuflhard and F. A. Potra, Asymptotic mesh independence of Newton-Galerkin methods via a refined Mysovskii theorem, SIAM J. Numer. Anal. 29 (1992), 1395-1412. MR 94a:65031
[18] S. C. Eisenstat and H. F. Walker, Choosing the forcing terms in an inexact Newton method, SIAM J. Sci. Comput. 17 (1996), 16-32. MR 96k:65037
[19] M. Gasparo and B. Morini, Inexact methods: forcing terms and conditioning, J. Optim. Theory Appl. 107 (2000), 573-589. MR 2001k:65089
[20] G. H. Golub and C. F. Van Loan, Matrix Computations, Third ed., The Johns Hopkins University Press, Baltimore, 1996. MR 97g:65006
[21] E. M. Kasenally, GMBACK: a generalised minimum backward error algorithm for nonsymmetric linear systems, SIAM J. Sci. Comput. 16 (1995), 698-719. MR 95m:65054
[22] E. M. Kasenally and V. Simoncini, Analysis of a minimum perturbation algorithm for nonsymmetric linear systems, SIAM J. Numer. Anal. 34 (1997), 48-66. MR 98a:65040
[23] C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations, SIAM, Philadelphia, 1995. MR 96d:65002
[24] Şt. Măruşter, Numerical Methods for Solving Nonlinear Equations, Ed. Tehnică, Bucharest, Romania, 1981 (in Romanian).
[25] H. J. Martinez, Z. Parada and R. A. Tapia, On the characterization of Q-superlinear convergence of quasi-Newton interior-point methods for nonlinear programming, Bol. Soc. Mat. Mexicana 1 (1995), 137-148. MR 97d:90090
[26] B. Morini, Convergence behaviour of inexact Newton methods, Math. Comp. 68 (1999), 1605-1613. MR 99m:65114
[27] J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970. MR 42:8686
[28] I. Păvăloiu, Introduction to the Theory of Approximating the Solutions of Equations, Ed. Dacia, Cluj-Napoca, Romania, 1976 (in Romanian).
[29] F. A. Potra, On the numerical stability of a class of iterative processes, Itinerant Seminar of Functional Equations: Approximation and Convexity, Timişoara, Romania, 1983, 51-54.
[30] _, On a class of iterative procedures for solving nonlinear equations in Banach spaces, Computational Mathematics, Banach Center Publications, PWN-Polish Scientific Publishers, Warsaw, Poland, vol. 13, 607-621, 1984. MR 87c:65068
[31] _, On Q-order and R-order of convergence, J. Optim. Theory Appl. 63 (1989), 415-431. MR 91d:65077
[32] _, Q-superlinear convergence of the iterates in primal-dual interior-point methods, Math. Progr., to appear. MR 2002i:90120
[33] W. C. Rheinboldt, Methods for Solving Systems of Nonlinear Equations, Second ed., SIAM, Philadelphia, PA, 1998. MR 2000c:65001
[34] Y. Saad and M. H. Schultz, GMRES: a minimum residual algorithm, SIAM J. Sci. Statist. Comput. 7 (1986), 856-869. MR 87g:65064
[35] H. F. Walker, An approach to continuation using Krylov subspace methods, in Computational Science in the 21st Century, M.-O. Bristeau, G. Etgen, W. Fitzgibbon, J. L. Lions, J. Periaux and M. F. Wheeler, eds., John Wiley and Sons, Ltd., 1997, 72-82.
[36] J. H. Wilkinson, The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965. MR 32:1894
[37] T. J. Ypma, Local convergence of inexact Newton methods, SIAM J. Numer. Anal. 21 (1984), 583-590. MR 85k:65043
Romanian Academy, "T. Popoviciu" Institute of Numerical Analysis, P. O. Box 68-1, Cluj-Napoca 3400, Romania
E-mail address: ecatinas@ictp.acad.ro
