Consider the nonlinear equation \(H(x):=F(x)+G(x)=0\), with \(F\) differentiable and \(G\) continuous, where \(F,G,H:X \rightarrow X\), \(X\) a Banach space.
Newton's method cannot be applied to \(H(x)=0\), since \(G\) is not differentiable; we therefore propose an iterative method for solving the nonlinear equation, obtained by combining the Newton method (for the differentiable part) with the chord/secant method (for the nondifferentiable part): \[x_{k+1} = x_k - \big(F^\prime(x_k)+[x_{k-1},x_k;G]\big)^{-1}\big(F(x_k)+G(x_k)\big).\]
We show that the r-convergence order of the method is the same as that of the chord/secant method.
We provide some numerical examples and compare different methods for a nonlinear system in \(\mathbb{R}^2\).
Authors
E. Cătinaş (Tiberiu Popoviciu Institute of Numerical Analysis)
[1] G. Goldner, M. Balazs, On the method of the chord and on a modification of it for the solution of nonlinear operator equations, Stud. Cerc. Mat., 20 (1968), pp. 981–990 (in Romanian).
[2] G. Goldner, M. Balazs, Remarks on divided differences and the method of chords (Observații asupra diferențelor divizate și asupra metodei coardei), Rev. Anal. Numer. Teoria Aproximatiei, 3 (1974) no. 1, pp. 19–30 (in Romanian).
[3] T. Yamamoto, A note on a posteriori error bound of Zabrejko and Nguen for Zincenko's iteration, Numer. Funct. Anal. Optimiz., 9 (1987) nos. 9–10, pp. 987–994.
[4] T. Yamamoto, Ball convergence theorems and error estimates for certain iterative methods for nonlinear equations, Japan J. Appl. Math., 7 (1990) no. 1, pp. 131–143.
[5] X. Chen, T. Yamamoto, Convergence domains of certain iterative methods for solving nonlinear equations, Numer. Funct. Anal. Optimiz., 10 (1989) nos. 1–2, pp. 37–48.
Paper (preprint) in HTML form
EMIL CĂTINAŞ
(Cluj-Napoca)
ON SOME ITERATIVE METHODS FOR SOLVING NONLINEAR EQUATIONS
1. INTRODUCTION
In the papers [3], [4] and [5], nonlinear equations of the following form are studied:
(1) \(F(x)+G(x)=0,\)
where \(F,G: X \rightarrow X\), \(X\) is a Banach space, \(F\) is a differentiable operator and \(G\) is continuous but nondifferentiable. For this reason Newton's method, i.e. the approximation of the solution \(x^{*}\) of equation (1) by the sequence \((x_k)_{k \ge 0}\) given by
(2) \(x_{k+1} = x_k - \big(F^{\prime}(x_k)+G^{\prime}(x_k)\big)^{-1}\big(F(x_k)+G(x_k)\big), \quad k=0,1,\ldots,\)
cannot be applied.
In the mentioned papers the following Newton-like methods are then considered:
(3) \(x_{k+1} = x_k - F^{\prime}(x_k)^{-1}\big(F(x_k)+G(x_k)\big), \quad k=0,1,\ldots,\)
or
(3′) \(x_{k+1} = x_k - A(x_k)^{-1}\big(F(x_k)+G(x_k)\big), \quad k=0,1,\ldots,\)
where \(A(x)\) is a linear operator approximating \(F^{\prime}(x)\). It is shown that, under certain conditions, these sequences converge to the solution of (1).
In the present paper, for solving equation (1), we propose the following method:
(4) \(x_{k+1} = x_k - \big(F^{\prime}(x_k)+[x_{k-1},x_k;G]\big)^{-1}\big(F(x_k)+G(x_k)\big), \quad k=1,2,\ldots,\)
where by \([x_{k-1},x_k;G]\) we have denoted the first order divided difference of \(G\) at the points \(x_{k-1}\) and \(x_k\).
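For a concrete illustration, method (4) can be sketched in the finite dimensional case (a minimal sketch, not taken from the paper: it assumes \(X=\mathbb{R}^n\) with the standard componentwise first order divided difference, and the names `divided_difference`, `method_4`, `F`, `dF`, `G` are ours):

```python
import numpy as np

def divided_difference(G, u, v):
    """First order divided difference [u, v; G] of G: R^n -> R^n.

    Column j uses the standard componentwise formula
    (G(v_1..v_j, u_{j+1}..u_n) - G(v_1..v_{j-1}, u_j..u_n)) / (v_j - u_j).
    """
    n = len(u)
    A = np.zeros((n, n))
    for j in range(n):
        if v[j] == u[j]:
            continue  # coinciding component: leave the column zero (F'(x_k) carries the smooth part)
        w1 = np.concatenate([v[:j + 1], u[j + 1:]])
        w0 = np.concatenate([v[:j], u[j:]])
        A[:, j] = (G(w1) - G(w0)) / (v[j] - u[j])
    return A

def method_4(F, dF, G, x0, x1, tol=1e-14, maxit=50):
    """x_{k+1} = x_k - (F'(x_k) + [x_{k-1}, x_k; G])^{-1} (F(x_k) + G(x_k))."""
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for _ in range(maxit):
        A = dF(x) + divided_difference(G, x_prev, x)
        step = np.linalg.solve(A, F(x) + G(x))
        x_prev, x = x, x - step
        if np.linalg.norm(step) < tol:
            break
    return x
```

Here `F` is the differentiable part (with Jacobian `dF`) and `G` the continuous, possibly nondifferentiable part; two starting points are required, as in the chord method.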
So, the proposed method is obtained by combining Newton's method with the method of chord. The r-convergence order of this method is the same as that of the method of chord, namely \(t_1=(1+\sqrt{5})/2\), which is greater than the r-order of the methods (3) and (3′) (see also the numerical example), but is less than the r-order of Newton's method (usually \(2\)).
But, unlike the method of chord, the proposed method has a better rate of convergence, because the use of \(F^{\prime}(x_k)\) instead of \([x_{k-1},x_k;F]\), as it is in the method of chord, does not affect the coefficient from the error inequalities which we shall obtain in the following.
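For context, the value of the chord r-order can be recovered by the standard argument (a textbook computation, not reproduced from the paper): assuming the errors satisfy an inequality of the type \(\|e_{k+1}\| \le c\,\|e_k\|\,\|e_{k-1}\|\), the substitution \(a_k := \ln(c\,\|e_k\|)\) gives \(a_{k+1} \le a_k + a_{k-1}\), a Fibonacci-type recursion whose characteristic equation determines the order:

```latex
% Characteristic equation of the Fibonacci-type recursion a_{k+1} = a_k + a_{k-1};
% its positive root is the r-order of the chord method.
\[
    t^2 = t + 1,
    \qquad
    t_1 = \frac{1+\sqrt{5}}{2} \approx 1.618.
\]
```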
2. THE CONVERGENCE OF THE METHOD
We shall use, as in [1] and [2], the known definitions for the divided differences of an operator.
Definition 1. An operator \([x,y;G]\) belonging to the space \(\mathcal{L}(X)\) (the Banach space of the linear and bounded operators from \(X\) to \(X\)) is called the first order divided difference of the operator \(G\) at the points \(x\) and \(y\) if the following properties hold:
a) \([x,y;G](y-x)=G(y)-G(x)\), for \(x \neq y\);
b) if \(G\) is Fréchet differentiable at \(x\), then \([x,x;G]=G^{\prime}(x)\).
Definition 2. An operator belonging to the space \(\mathcal{L}(X,\mathcal{L}(X))\), denoted by \([x,y,z;G]\), is called the second order divided difference of the operator \(G\) at the points \(x\), \(y\) and \(z\) if the following properties hold:
a) \([x,y,z;G](z-x)=[y,z;G]-[x,y;G]\) for the distinct points \(x\), \(y\), \(z\);
b) if \(G\) is two times differentiable at \(x\), then \([x,x,x;G]=\tfrac{1}{2}G^{\prime\prime}(x)\).
We shall denote by \(B(x_0,r)=\{x \in X : \|x-x_0\| \le r\}\) the ball having the center at \(x_0\) and the radius \(r>0\).
Concerning the convergence of the iterative process (4) we shall prove the following result.
Theorem 3. If there exist elements \(x_0, x_1 \in X\) and positive real numbers such that the conditions
i) the operator \(F\) is Fréchet differentiable on the ball and satisfies
ii) the operator \(G\) is continuous on the ball;
iii) for any distinct points the first order divided difference of \(G\) exists and the inequality
is true;
iv) for any distinct points we have the inequality
v) the elements \(x_0, x_1\) satisfy
vi) the following relations hold:
where \((f_k)\) is the Fibonacci sequence \(f_1=f_2=1\), \(f_{k+1}=f_k+f_{k-1}\),
are fulfilled, then the sequence \((x_k)_{k \ge 0}\) generated by (4) is well defined, all its terms belonging to the ball.
Moreover, the following properties are true:
j) the sequence \((x_k)_{k \ge 0}\) is convergent;
jj) let \(x^{*}=\lim_{k \rightarrow \infty} x_k\); then \(x^{*}\) is a solution of the equation (1);
jjj) the following a priori error estimates hold:
Proof. We shall prove first by induction that, for any ,
(5)
(6)
(7)
For , from ) and we infer the above relations.
Let us suppose now that relations (5), (6) and (7) hold for , where . Since , we can construct from (4), whence, using , we have
.
For the estimation of we shall rely on the equality
(easily obtained from Definition 1 and Definition 2), which implies, using iv),
(8)
and on the inequality
(9)
valid because of the assumptions concerning .
For , by (4), we get
whence
The above relation, together with (8), (9) and (6), implies
From the induction hypothesis we have, on the one hand, that
that is, (6) for , and, on the other hand,
that is, (7) for .
The fact that results from:
Now we shall prove that \((x_k)_{k \ge 0}\) is a Cauchy sequence, whence j) follows.
It is obvious that
for .
So, for any we have
Using Bernoulli’s inequality, it follows
Hence
and \((x_k)_{k \ge 0}\) is a Cauchy sequence.
It follows that \((x_k)_{k \ge 0}\) is convergent; let \(x^{*}=\lim_{k \rightarrow \infty} x_k\). Passing to the limit in (4), we get that \(x^{*}\) is a solution of (1); passing to the limit in the above relation, we obtain the very relation jjj).
The theorem is proved.
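To make the role of the Fibonacci sequence tangible, one can simulate the model error recursion \(e_{k+1}=c\,e_k e_{k-1}\) (an illustrative numerical check, not part of the proof; the coefficient and starting errors are ours):

```python
import math

c, e = 2.0, [0.1, 0.05]  # illustrative coefficient and starting errors
for _ in range(10):
    e.append(c * e[-1] * e[-2])

# The scaled logs a_k = ln(c * e_k) satisfy exactly a_{k+1} = a_k + a_{k-1},
# i.e. they grow in absolute value like the Fibonacci numbers f_k, and the
# ratio a_{k+1}/a_k tends to the golden ratio (1 + sqrt(5)) / 2.
a = [math.log(c * v) for v in e]
ratios = [a[k + 1] / a[k] for k in range(len(a) - 1)]
```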
3. NUMERICAL EXAMPLE
Given the system
we shall consider . For we take .
We shall take as
Using method (3) with \(x_0=(1,0)\), we obtain

k     x1                   x2
0     1                    0
1     1                    0.333333333333333
2     0.906550218340611    0.354002911208151
3     0.885328400663412    0.338027276361332
4     0.891329556832800    0.326613976593566
5     0.895238815463844    0.326406852843625
6     0.895154671372635    0.327730334045043
7     0.894673743471137    0.327979154372032
8     0.894598908977448    0.327865059348755
9     0.894643228355865    0.327815039208286
10    0.894659993615645    0.327819889264891
11    0.894657640195329    0.327826728208560
12    0.894655219565091    0.327827351826856
13    0.894655074977661    0.327826643198819
...
39    0.894655373334687    0.327826521746298
Using the method of chord with \(x_0=(5,5)\) and \(x_1=(1,0)\), we obtain

k     x1                   x2
0     5                    5
1     1                    0
2     0.989800874210782    0.012627489072365
3     0.921814765493287    0.307939916152262
4     0.900073765669214    0.325927010697792
5     0.894939851624105    0.327725437396226
6     0.894658420586013    0.327825363500783
7     0.894655375077418    0.327826521051833
8     0.894655373334698    0.327826521746293
9     0.894655373334687    0.327826521746298
10    0.894655373334687    0.327826521746298
Using method (4) with \(x_0=(5,5)\) and \(x_1=(1,0)\), we obtain

k     x1                   x2
0     5                    5
1     1                    0
2     0.909090909090909    0.363636363636364
3     0.894886945874111    0.329098638203090
4     0.894655531991499    0.327827544745569
5     0.894655373334793    0.327826521746906
6     0.894655373334687    0.327826521746298
7     0.894655373334687    0.327826521746298
It can be easily seen that, given these data, method (4) converges faster than (3) and than the method of chord.
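As a sanity check on the claimed orders, the empirical convergence order can be estimated directly from the tabulated iterates of method (4) (a rough diagnostic of our own, using only the first component and taking the last tabulated value as \(x^{*}\); the last differences sit near machine precision):

```python
import math

# first components of the method (4) iterates, k = 2..5, from the table above
x1 = [0.909090909090909, 0.894886945874111, 0.894655531991499, 0.894655373334793]
x1_star = 0.894655373334687  # last tabulated value, used as the limit

e = [abs(v - x1_star) for v in x1]
# empirical order estimate: p_k = ln(e_{k+1}/e_k) / ln(e_k/e_{k-1})
orders = [math.log(e[k + 1] / e[k]) / math.log(e[k] / e[k - 1])
          for k in range(1, len(e) - 1)]
```

The estimates come out clearly superlinear, above the rate observed for method (3), which is consistent with the chord-type r-order discussed in the introduction.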