In this paper we present new iterative algorithms in convex metric spaces. We show that these iterative schemes converge to the fixed point of a single-valued contraction operator. We then compare their rates of convergence. Additionally, numerical examples for these iteration processes are given.
Authors
Cristian Daniel Alecsa
Babes-Bolyai University, Department of Mathematics, Cluj-Napoca, Romania,
Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy, Cluj-Napoca, Romania
C.-D. Alecsa, On new faster fixed point iterative schemes for contraction operators and comparison of their rate of convergence in convex metric spaces, International Journal of Nonlinear Analysis and Applications, 8 (2017) no. 1, pp. 353-388. doi: 10.22075/ijnaa.2017.11144.1543
[1] M. Abbas and T. Nazir, A new faster iteration process applied to constrained minimization and feasibility problems, Matematicki Vesnik 66 (2014) 223–234.
[2] R. Agarwal, D. O’Regan and D.R. Sahu, Iterative construction of fixed points of nearly asymptotically nonexpansive mappings, Journal of Nonlinear and Convex Analysis 8 (2007) 61–79.
[3] S. Akbulut and M. Ozdemir, Picard iteration converges faster than Noor iteration for a class of quasi-contractive operators, Chiang Mai J. Sci. 39 (2012) 688–692.
[4] V. Berinde, Picard iteration converges faster than Mann iteration for a class of quasi-contractive operators, Fixed Point Theory Appl. 2004 (2004) 97–105.
[5] M.R. Bridson and A. Haefliger, Metric Spaces of Non-Positive Curvature, Springer-Verlag, Berlin, Heidelberg, New York, 1999.
[6] H. Fukhar-ud-din and V. Berinde, Iterative methods for the class of quasi-contractive type operators and comparison of their rate of convergence in convex metric spaces, Filomat 30 (2016) 223–230.
[7] K. Goebel and W. Kirk, Iteration processes for nonexpansive mappings, Topological Methods in Nonlinear Functional Analysis, Contemporary Mathematics 21, Amer. Math. Soc. Providence (1983), 115–123.
[8] F. Gursoy and V. Karakaya, A Picard-S hybrid type iteration method for solving a differential equation with retarded argument, http://arxiv.org/abs/1403.2546
[9] H.V. Machado, A characterization of convex subsets of normed spaces, Kodai Math. Sem. Rep. 25 (1973) 307–320.
[10] R. Chugh, P. Malik and V. Kumar, On a new faster implicit fixed point iterative scheme in convex metric spaces, J. Function Spaces (2015), Article ID 905834.
[11] W. Phuengrattana and S. Suantai, Comparison of the rate of convergence of various iterative methods for the class of weak contractions in Banach Spaces, Thai J. Math. 11 (2013) 217–226.
[12] S. Reich and I. Shafrir, Nonexpansive iterations in hyperbolic spaces, Nonlinear Anal. 15 (1990) 537–558.
[13] W. Sintunavarat and A. Pitea, On a new iteration scheme for numerical reckoning fixed points of Berinde mappings with convergence analysis, J. Nonlinear Science Appl. 9 (2016) 2553–2562.
[14] S. Suantai, Weak and strong convergence criteria of Noor iterations for asymptotically nonexpansive mappings, J. Math. Anal. Appl. 311 (2005) 506–517.
[15] W. Takahashi, A convexity in metric spaces and nonexpansive mappings, Kodai Math. Sem. Rep. 22 (1970) 142–149.
[16] L.A. Talman, Fixed points for condensing multifunctions in metric spaces with convex structure, Kodai Math. Sem. Rep. 29 (1977) 62–70.
[17] T. Zamfirescu, Fixed point theorems in metric spaces, Arch. Math. (Basel), 23 (1972) 292–298.
On new faster fixed point iterative schemes for contraction operators and comparison of their rate of convergence in convex metric spaces
Most real-world problems of the applied sciences are, in general, functional equations. Such equations can be written as fixed point equations. It is then necessary to develop iterative processes that approximate the solutions of these equations with a good rate of convergence.
Many studies in the field of fixed point theory concerning the existence and uniqueness of fixed points of single-valued contractions have been developed using basic iterative algorithms, such as the Picard iteration and the Krasnoselskii, Mann and Ishikawa iterative processes. Over the years, interest in the speed of convergence of such iterations has grown rapidly. For example, many authors considered numerous iteration processes and studied their rate of convergence; for this, see [1]–[4], [6], [8] and [10]–[14]. Some iterations were introduced to study the fixed points of contractions, while others [12] were introduced in the context of nonexpansive mappings. Furthermore, some authors [6] compared the rate of convergence of iterative algorithms for the class of quasi-contractions. Finally, since the class of convex metric spaces is larger than the well-known class of linear normed spaces, we shall work in the setting of convex metric spaces introduced by W. Takahashi.
Our aim is to introduce new iteration processes and to prove that, under suitable circumstances, they are faster than most of the classical iterations found in the literature. We support the analytic proofs with numerical examples.
In the present research paper, we work on a nonlinear domain, more explicitly on a convex metric space. Following [15], let $(X, d)$ be a metric space and $W : X \times X \times [0,1] \to X$ a mapping called a convexity structure. If
$$d(u, W(x, y, \lambda)) \le \lambda\, d(u, x) + (1 - \lambda)\, d(u, y),$$
for all $x, y, u \in X$ and $\lambda \in [0,1]$, then $(X, d, W)$ is called a convex metric space. Additionally, following [16], we have that $W(x, x, \lambda) = x$, for all $x \in X$ and $\lambda \in [0,1]$.
A nonempty subset $C$ of a convex metric space $X$ is convex if $W(x, y, \lambda) \in C$, for all $x, y \in C$ and $\lambda \in [0,1]$.
We remind the reader of two important basic examples of convex metric spaces: CAT(0) spaces and linear normed spaces. For details, we refer the reader to [5] and [15]. Other important examples are the hyperbolic spaces introduced by Goebel and Kirk and the hyperbolic spaces in the sense of Reich and Shafrir; for details, see [7] and [12].
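As an illustration for the simplest example, in a linear normed space the convexity structure is $W(x, y, \lambda) = \lambda x + (1-\lambda) y$, and Takahashi's inequality follows from the triangle inequality. The following sketch (our own illustration; the points and ranges are arbitrary choices) verifies the inequality numerically for random points of the Euclidean plane:

```python
import random

# In a normed space the convexity structure is W(x, y, lam) = lam*x + (1-lam)*y.
# We check Takahashi's inequality
#     d(u, W(x, y, lam)) <= lam*d(u, x) + (1 - lam)*d(u, y)
# for random points of the Euclidean plane.

def d(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def W(x, y, lam):
    return (lam * x[0] + (1 - lam) * y[0], lam * x[1] + (1 - lam) * y[1])

random.seed(0)
for _ in range(1000):
    u, x, y = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(3)]
    lam = random.random()
    assert d(u, W(x, y, lam)) <= lam * d(u, x) + (1 - lam) * d(u, y) + 1e-12
```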
Simplifying some existing iteration processes in the literature, we recall Machado's definition from [9] of general convex combinations defined on convex metric spaces. For $x_1, \dots, x_n \in X$ and $\lambda_1, \dots, \lambda_n \in [0, 1]$ with $\sum_{i=1}^{n} \lambda_i = 1$, we define the multiple convex combination of $x_1, \dots, x_n$ by
$$W(x_1, \dots, x_n; \lambda_1, \dots, \lambda_n) = W\!\left(x_1, W\!\left(x_2, \dots, x_n; \tfrac{\lambda_2}{1-\lambda_1}, \dots, \tfrac{\lambda_n}{1-\lambda_1}\right), \lambda_1\right), \quad \text{if } \lambda_1 \neq 1,$$
and $W(x_1, \dots, x_n; 1, 0, \dots, 0) = x_1$, if $\lambda_1 = 1$.
In the next section, we will work in the cases when and . For the simplicity of this remark, we consider that . The other case is obvious and follows from the above definition.
We make the convention that, for , we have:
, where .
Furthermore, we remind that we have used the following property of convex metric spaces : and . For , we have that
, where , as in the case when . Also, we have that .
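In a normed space, Machado's multiple convex combination reduces, through the recursive definition recalled above, to the ordinary weighted average. The following sketch (our own illustration) checks that the nested binary reduction and the direct weighted sum agree:

```python
def W(x, y, lam):
    # binary convexity structure of a normed space (here: the plane)
    return tuple(lam * a + (1 - lam) * b for a, b in zip(x, y))

def multi_W(points, weights):
    # Multiple convex combination via nested binary combinations:
    # W(x1,...,xn; l1,...,ln) = W(x1, W(x2,...,xn; l2/(1-l1),...), l1) if l1 != 1.
    if len(points) == 1 or weights[0] == 1.0:
        return points[0]
    l1 = weights[0]
    rest = [w / (1 - l1) for w in weights[1:]]
    return W(points[0], multi_W(points[1:], rest), l1)

pts = [(1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
wts = [0.5, 0.3, 0.2]
direct = tuple(sum(w * p[i] for w, p in zip(wts, pts)) for i in range(2))
nested = multi_W(pts, wts)
assert all(abs(a - b) < 1e-12 for a, b in zip(nested, direct))
```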
In the next sections, we will work with the definition of convex metric spaces and with the notion of a single-valued contraction. We recall this concept here.
Definition 1.1. Let $(X, d)$ be a metric space and $T : X \to X$ an operator. We say that $T$ is a $k$-contraction if there exists $k \in (0, 1)$ such that
$$d(Tx, Ty) \le k\, d(x, y), \quad \text{for each } x, y \in X.$$
For simplicity of notation, we will write $Tx$ instead of $T(x)$, for each $x \in X$. Also, if $(X, d)$ is a complete metric space, then $T$ has a unique fixed point in $X$.
As we shall present some iterative algorithms defined by multiple convex combinations and compare their rate of convergence, we recall some definitions of convergence suitable for the case of metric spaces. For details, see [6], [10] and [1].
Definition 1.2. Let $\{a_n\}$ and $\{b_n\}$ be two sequences of positive numbers that converge to $a$, respectively $b$. Assume that there exists the limit
$$l = \lim_{n \to \infty} \frac{|a_n - a|}{|b_n - b|}.$$
(i) If $l = 0$, then it is said that $\{a_n\}$ converges faster to $a$ than $\{b_n\}$ to $b$.
(ii) If $l \in (0, \infty)$, then it is said that $\{a_n\}$ and $\{b_n\}$ have the same rate of convergence.
Definition 1.3. Suppose that we have two iteration sequences $\{x_n\}$ and $\{y_n\}$, both converging to a fixed point $p$. Let $\{a_n\}$ and $\{b_n\}$ be two sequences of positive numbers such that
$$d(x_n, p) \le a_n \quad \text{and} \quad d(y_n, p) \le b_n, \quad \text{for all } n \in \mathbb{N},$$
where $\{a_n\}$ and $\{b_n\}$ converge to $0$. If $\{a_n\}$ converges faster than $\{b_n\}$ in the sense of Definition 1.2, then $\{x_n\}$ is said to converge faster than $\{y_n\}$ to $p$.
In this article, we use as references the articles of Abbas, Nazir, Gursoy, Karakaya and Berinde, namely [1], [8] and [4]. There, the authors used Definitions 1.2 and 1.3 for comparing the rate of convergence of new iterations with the Picard iteration.
In [11], [6] and [4], Suantai, Berinde et al. made the following remark: the original definition for the comparison of rates of convergence depends on the sequences $\{a_n\}$ and $\{b_n\}$, which appear only as auxiliary sequences in some iterative processes. Therefore, Definitions 1.2 and 1.3 are not consistent, and this method of comparing the rate of convergence of two iterative algorithms is unclear.
In [11], Phuengrattana and Suantai proposed a new definition in convex metric spaces (see [6]).
Definition 1.4. If $\{x_n\}$ and $\{y_n\}$ are two iterative sequences that converge to the unique fixed point $p$ of $T$, then $\{x_n\}$ converges faster than $\{y_n\}$ if
$$\lim_{n \to \infty} \frac{d(x_n, p)}{d(y_n, p)} = 0.$$
In the case of convex metric spaces, if we use the above definition for comparing the rate of convergence of two iterative schemes, then we need the following property (see [6] and [11]).
Remark 1.5. For each $x, y \in X$ and $\lambda \in [0, 1]$, we have that
$$d(x, W(x, y, \lambda)) = (1 - \lambda)\, d(x, y) \quad \text{and} \quad d(y, W(x, y, \lambda)) = \lambda\, d(x, y).$$
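Definition 1.4 can be observed at work on a simple example: for a contraction on the real line, the ratio $d(x_n, p)/d(y_n, p)$ between the Picard and Mann errors tends to $0$, so the Picard iterates converge faster. A minimal sketch (the map, the constant Mann parameter and the starting point are our own illustrative choices):

```python
def T(x):
    return x / 2.0        # k-contraction on the real line, k = 1/2, fixed point p = 0

p = 0.0
x = y = 1.0               # common starting point
alpha = 0.5               # constant Mann parameter (an illustrative choice)
ratios = []
for n in range(30):
    x = T(x)                              # Picard: x_{n+1} = T x_n
    y = alpha * T(y) + (1 - alpha) * y    # Mann:   y_{n+1} = W(T y_n, y_n, alpha)
    ratios.append(abs(x - p) / abs(y - p))
# d(x_n, p)/d(y_n, p) = (2/3)^n -> 0, so Picard converges faster (Definition 1.4)
assert ratios[-1] < 1e-4
```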
In the fixed point literature there are many classical iteration schemes defined on normed linear spaces and on metric spaces endowed with a convexity structure. Following [6] and [10], we recall some of them, with the remark that, in the research article [10], the authors use a modified version of a convex metric space, namely a hyperbolic space in the sense of Goebel and Kirk. Consequently, those iterative schemes are defined with the order of the two terms appearing in the convexity structure reversed:
Let $C$ be a convex subset of the convex metric space $(X, d, W)$ and $T : C \to C$ a contraction mapping. Moreover, let $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$ be sequences in $(0, 1)$. The classical Noor iteration is
$$x_{n+1} = W(T y_n, x_n, \alpha_n), \quad y_n = W(T z_n, x_n, \beta_n), \quad z_n = W(T x_n, x_n, \gamma_n). \tag{1.1}$$
Putting $\gamma_n = 0$, so that $z_n = x_n$ for each $n \in \mathbb{N}$, we get the well-known Ishikawa iteration in convex metric spaces:
$$x_{n+1} = W(T y_n, x_n, \alpha_n), \quad y_n = W(T x_n, x_n, \beta_n). \tag{1.2}$$
Putting also $\beta_n = 0$, so that $y_n = x_n$ for each $n \in \mathbb{N}$, we get the well-known Mann iteration in convex metric spaces:
$$x_{n+1} = W(T x_n, x_n, \alpha_n). \tag{1.3}$$
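On the real line, a step of the standard three-step Noor scheme can be sketched as follows; the contraction $T$ and the constant parameter sequences are our own illustrative choices:

```python
def W(a, b, lam):
    return lam * a + (1 - lam) * b    # convexity structure of the real line

def T(x):
    return 0.5 * x + 1.0              # contraction with k = 1/2 and fixed point 2

def noor(x0, alpha, beta, gamma, steps):
    # Standard three-step Noor scheme written with the convexity structure:
    #   z_n = W(T x_n, x_n, gamma_n),  y_n = W(T z_n, x_n, beta_n),
    #   x_{n+1} = W(T y_n, x_n, alpha_n)
    x = x0
    for n in range(steps):
        z = W(T(x), x, gamma(n))
        y = W(T(z), x, beta(n))
        x = W(T(y), x, alpha(n))
    return x

x_star = noor(10.0, lambda n: 0.7, lambda n: 0.5, lambda n: 0.5, 200)
assert abs(x_star - 2.0) < 1e-9       # converges to the unique fixed point
```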
Furthermore, we remark that we can employ a condition from hyperbolic spaces, which is satisfied in linear normed spaces, namely $W(x, y, \lambda) = W(y, x, 1 - \lambda)$, for each $x, y \in X$ and $\lambda \in [0, 1]$. This condition is not at all restrictive and has the advantage that the two terms in the convexity structure can be swapped with one another without affecting the convergence of the fixed point iteration.
Moreover, we recall the basic fixed point iteration which appears in the Banach contraction principle, that is, the Picard iteration:
$$x_{n+1} = T x_n. \tag{1.4}$$
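A minimal sketch of the Picard iteration for a real contraction (the map and the starting point are our own choices); by the Banach contraction principle the iterates converge to the unique fixed point:

```python
def T(x):
    return 0.5 * x + 1.0   # k-contraction on the real line, k = 1/2, fixed point p = 2

x = 10.0
for n in range(60):        # Picard iteration: x_{n+1} = T x_n
    x = T(x)
# for this affine map the error contracts exactly by k at each step: 0.5**60 * 8
assert abs(x - 2.0) < 1e-12
```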
Other interesting iteration algorithms are the implicit iterations. Following [10], we recall the implicit Noor iteration:
Putting , then , for each , we get the implicit Ishikawa iteration in convex metric spaces:
Additionally, putting , it follows that , for each ; we get the implicit Mann iteration:
(1.7)
Now, we recall sufficient conditions for the convergence of the Noor iteration, respectively of the implicit Noor iteration, to the fixed point of a contraction mapping.
Remark 1.6. Since the Noor iteration is more general than the Ishikawa and Mann iterations, we recall that the classical Noor iteration (1.1) converges to the fixed point of the contraction mapping if . In a similar way, since the implicit Noor iteration is more general than the implicit Mann and implicit Ishikawa iterations, we recall that the implicit Noor algorithm (1.5) is convergent when .
Following [3], we recall the following.
Definition 1.7. Let $(X, d)$ be a metric space and $T : X \to X$ a map for which there exist the real numbers $a \in (0, 1)$ and $b, c \in (0, 1/2)$ such that, for each pair $x, y \in X$, at least one of the following is true:
(z1) $d(Tx, Ty) \le a\, d(x, y)$;
(z2) $d(Tx, Ty) \le b\, [d(x, Tx) + d(y, Ty)]$;
(z3) $d(Tx, Ty) \le c\, [d(x, Ty) + d(y, Tx)]$.
Then $T$ is called a Zamfirescu operator. Moreover, by [17], if $(X, d)$ is a complete metric space, $T$ has a unique fixed point.
Regarding contraction mappings and Zamfirescu operators, we have the following
Remark 1.8. In [3], in the context of normed linear spaces, it is shown that the Picard iteration converges faster than the Noor iteration for Zamfirescu operators. The same remark applies in the context of convex metric spaces as well. Since any contraction is a Zamfirescu operator by condition (z1), the above property remains true for contraction mappings. Moreover, because of the complexity of the convergence computations for some iterative schemes, in the next sections we will use the condition that $T$ is a contraction with contraction constant $k$. The same proofs can be applied in the same way to Zamfirescu operators; we leave this as an open problem.
We now present the three goals that we will pursue throughout the next sections.
(A.) Let be a nonempty convex subset of a normed space and a -contraction map.
In 2005 Suantai [14] introduced a modified Noor iterative method with sequences
and
In the case when is a nonempty convex subset of a convex metric space , Berinde modified the above iteration with the use of the convexity structure and defined the iteration as follows
For the convergence of the above iteration to the fixed point of the nonlinear contraction mapping, we remind the following
Remark 1.9. Following the same article of Berinde [6], the above iteration is convergent when the next assumptions are satisfied
for each and .
Moreover, we have that , for each .
Our first goal in the present research article is to find at least one iteration faster than (1.9), under some assumptions on the sequences, that makes use of the definition of multiple convex combinations introduced by Machado in [9].
(B.) In [2], Agarwal et al. presented a new iteration defined on a nonempty convex subset of a normed space, which can be adapted easily to convex metric spaces. This iteration is defined by and
The above iteration (1.10) was introduced as an example of an iteration that is faster than the Picard iteration (1.4), with respect to Definitions 1.2 and 1.3.
In the context of a nonempty convex subset of a convex metric space, the above iteration is defined by and
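As an illustration, a minimal sketch of the standard S iteration of Agarwal et al. on the real line, written with the convexity structure of the line (the contraction $T$, the constant parameters and the starting point are our own choices):

```python
def W(a, b, lam):
    return lam * a + (1 - lam) * b   # convexity structure of the real line

def T(x):
    return 0.5 * x + 1.0             # contraction with k = 1/2, fixed point p = 2

x = 10.0
for n in range(100):
    # standard S iteration:  y_n = W(T x_n, x_n, beta_n),
    #                        x_{n+1} = W(T y_n, T x_n, alpha_n)
    y = W(T(x), x, 0.5)
    x = W(T(y), T(x), 0.5)
assert abs(x - 2.0) < 1e-12          # converges to the unique fixed point
```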
In [1], Abbas and Nazir improved the above iteration and presented a three-step iteration. We present it in the context of convex metric spaces:
From the same paper [1], we recall the following convergence concept.
Remark 1.10. The above iteration (1.12) is convergent when the next assumptions are satisfied: and .
In this case, we have that , for each .
In the fixed point literature we can find other classical iterations. From [8], we recall them in the context of convex metric spaces:
iteration, with and
iteration, with and
iteration, with and
Additionally, in [8], Gursoy and Karakaya presented a modified Picard-S hybrid iteration that is faster than all of the classical iterations (1.3), (1.2), (1.1), (1.13), (1.14) and (1.15):
We recall from [8] that the above iteration (1.16) is faster than the S iteration (1.14), and the latter is faster than the Picard iteration (1.4). So this iteration answers the question of Agarwal et al., i.e. it is indeed an example of an iterative process that converges faster than Picard's.
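This speed ordering can be observed numerically. The sketch below runs the standard S iteration and the Picard-S hybrid side by side for a simple real contraction; the map and the constant parameter sequences are our own illustrative choices, and the Picard-S step follows the usual form $z_n = W(Tx_n, x_n, \beta_n)$, $y_n = W(Tz_n, Tx_n, \alpha_n)$, $x_{n+1} = Ty_n$:

```python
def T(x):
    return 0.5 * x + 1.0              # contraction with k = 1/2, fixed point p = 2

p, alpha, beta = 2.0, 0.5, 0.5        # constant parameter sequences (our choice)
s = h = 10.0                          # s: S iteration, h: Picard-S hybrid
for n in range(20):
    # S iteration:     y = W(Tx, x, beta),   x' = W(Ty, Tx, alpha)
    ys = beta * T(s) + (1 - beta) * s
    s = alpha * T(ys) + (1 - alpha) * T(s)
    # Picard-S hybrid: z = W(Tx, x, beta),   y = W(Tz, Tx, alpha),   x' = T y
    z = beta * T(h) + (1 - beta) * h
    yh = alpha * T(z) + (1 - alpha) * T(h)
    h = T(yh)
assert abs(h - p) < abs(s - p)        # the hybrid error is strictly smaller here
```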
Moreover, we have the following results concerning iteration (1.16)
Remark 1.11. The iteration (1.16) is convergent when . Also, we have that , for each .
We let the reader get into details for the following remark.
Remark 1.12. When the sequence satisfies , iteration (1.16) is faster than iteration (1.12), in the sense of Definitions 1.2 and 1.3.
Also, regarding the question of Agarwal, Sintunavarat and Pitea in [13] introduced a new iteration better than those of Agarwal and of Picard. That is the iteration, with and
From the same paper, we recall the assumptions under which the iteration (1.17) is convergent to the unique fixed point of the contraction mapping.
Remark 1.13. If and , with , then iteration (1.17) converges faster to the fixed point of than iteration (1.11).
It is obvious that this iteration requires stronger conditions than the above ones, so we can eliminate it from our discussion. Thus, another goal of this paper is to find iterations with a better rate of convergence than (1.16), which implies that the iteration we seek is faster than (1.12), (1.16) and (1.4), also with respect to Definitions 1.2 and 1.3.
(C.) The last goal of this paper is to present an implicit Noor-type iteration faster than the implicit Noor iteration (1.5) already existing in the literature. Then, we want to modify it through multiple convex combinations and analyze the rate of convergence.
2. Convergence Analysis
Let us impose on the convex metric space the property of hyperbolic spaces in the sense of Goebel and Kirk, that is, $W(x, y, \lambda) = W(y, x, 1 - \lambda)$, for each $x, y \in X$ and $\lambda \in [0, 1]$. This property is easily satisfied in a linear normed space.
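On the real line this symmetry property of the usual convexity structure can be checked directly; a quick numerical sketch (our own illustration):

```python
import random

def W(x, y, lam):
    return lam * x + (1 - lam) * y   # convexity structure of the real line

random.seed(1)
for _ in range(1000):
    x, y, lam = random.random(), random.random(), random.random()
    # symmetry condition: W(x, y, lam) = W(y, x, 1 - lam)
    assert abs(W(x, y, lam) - W(y, x, 1 - lam)) < 1e-12
```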
The first main result of this section improves Suantai's iteration (1.9) in convex metric spaces. The next iteration is an implicit algorithm built from multiple convex combinations; let's call it Suantai implicit:
In terms of simple convex combinations, this iteration is
Our first result of this section concerns the conditions under which iteration (2.1) is convergent to the unique fixed point of a -contraction.
Theorem 2.1. Let be a nonempty, closed and convex subset of a complete convex metric space . Let be a -contraction. Let and sequences in such that . Then in (2.1) is convergent to the unique fixed point of .
Proof. We evaluate
.
Then, we get that .
In a similar way, we evaluate
.
Then, we get that .
For , we have that
.
Then, we get that .
Combining the above results, we have the estimation
, where and , for each .
It is easy to see that, because of , results that .
Also,
.
In a similar manner . Since and , we have that , for each .
So, . This means that :
(2.2)
In view of the fact that , for , the above inequality (2.2) reduces to
Since , letting , we get
where is the unique fixed point of the -contraction operator .
As particular cases of iteration (2.1) we obtain classical iterations, such as the implicit Noor and implicit Ishikawa iterative processes.
Remark 2.2. In (2.1), taking , we get Implicit Noor iteration (1.5) and taking , we get Implicit Ishikawa iteration (1.6).
In the following, we present a Noor-type implicit iteration which is faster than the original implicit Noor iteration (1.5). Let's call it Noor implicit II:
Now, we present our second result regarding the conditions under which iteration (2.3) is convergent to the unique fixed point of a -contraction.
Theorem 2.3. Let be a nonempty, closed and convex subset of a complete convex metric space . Let be a -contraction. Let sequences in ( 0,1 ). Then in (2.3) is convergent to the unique fixed point of .
Proof. First, we evaluate
.
That is, .
In a similar way, we estimate
.
We get that, .
For , we have
.
We get that, .
Combining the above results, it follows that:
(2.4)
From the contraction assumption on the operator, that , and from the assumption that the sequences and are positive, it follows that:
, for each . So, we obtain
.
Letting , because of , we get , so the sequence is convergent to the unique fixed point of .
Now, we present a useful remark concerning iteration (2.3).
Remark 2.4. The above iteration (2.3) has weak hypotheses, because, without any other assumptions on the sequences and , for each , so is a contraction sequence.
Iteration (2.3) is faster than the Picard iteration (1.4) in the sense of Definitions 1.2 and 1.3, as follows.
Remark 2.5. In the case of the Picard iteration (1.4), the sequence has the term between two consecutive elements. The above iteration (2.3) is evidently faster than the Picard iteration, because of the term in the approximation of the sequence , in the sense of Definitions 1.2 and 1.3.
Now, we can combine the Gursoy-Karakaya iteration (1.16) with the implicit Noor II iteration (2.3). Let's call it GKN implicit II:
Theorem 2.6. Let be a nonempty, closed and convex subset of a complete convex metric space . Let be a -contraction. Let , sequences in ( 0,1 ). Then in (2.5) is convergent to the unique fixed point of .
Proof. First, we estimate
.
That is, .
In a similar way, we evaluate
.
We get that, .
For , we have
.
Combining the above results, we obtain:
(2.6)
From the contraction assumption on the operator, that , and from the assumption that the sequences and are positive, it follows that:
, for each . So, we obtain
.
Letting , because of , we get , so the sequence is convergent to the unique fixed point of .
Remark 2.7. The above iteration (2.5) has weak hypotheses, because, without any other assumptions on the sequences and , for each .
In the spirit of Definitions 1.2 and 1.3, iteration (2.5) is faster than the Picard iteration, as follows.
Remark 2.8. In the case of the Picard iteration (1.4), the sequence has the term between two consecutive elements. The above iteration (2.5) is evidently faster than the Picard iteration, because of the term in the approximation of the sequence with the general term , following Definitions 1.2 and 1.3.
The next two iterations are modified versions of (2.5) through multiple convex combinations (or simply m.c.c.). The iterative scheme presented below will be called GKN implicit II with multiple convex combinations 1, or simply GKN implicit II m.c.c. 1:
and in the case of simple convex combinations
For our new iteration (2.7), we present the conditions under which it converges to the unique fixed point of a -contraction, as follows.
Theorem 2.9. Let be a nonempty, closed and convex subset of a complete convex metric space . Let be a -contraction. Let sequences in ( 0,1 ). Then in (2.7) is convergent to the unique fixed point of .
Proof. We evaluate the following distance
.
That is, .
We estimate the following
.
That is, .
For , we have that :
.
Combining these results, we get that
(2.8)
Since and is a sequence of positive numbers, we get that , so , where .
Because of , for each , so .
Finally, it implies that .
Letting , because , we get that converges to the unique fixed point of the -contraction operator .
The next remark concerns some particular cases of iteration (2.7).
Remark 2.10. If we put in the above iteration (2.7), we get iteration (2.5).
Additionally, putting in iteration (2.3), we get iteration (2.5).
Let's call the next iterative scheme GKN implicit II with multiple convex combinations 2, or simply GKN implicit II m.c.c. 2:
In the case of simple convex combinations, we have
For the convergence of iteration (2.9) to the unique fixed point of a contraction mapping, we have the following.
Theorem 2.11. Let be a nonempty, closed and convex subset of a complete convex metric space . Let be a -contraction. Let sequences in ( 0,1 ). Then in (2.9) is convergent to the unique fixed point of .
Proof. We evaluate
.
That is, .
Now, we estimate the following distance
.
That is, .
For , we have
.
Combining the above results, we get that
(2.10)
Since and is a sequence of positive numbers, we get that , so .
Moreover, , this means that .
So, .
Letting , because , we get that converges to the unique fixed point of the -contraction operator .
A particular case of the iterative process (2.9) is presented in the next remark.
Remark 2.12. If we put in the above iteration (2.9), we get iteration (2.5).
Let’s call the next iteration GKN implicit II with multiple convex combinations 3, or simply GKN implicit II m.c.c. 3
, and in the case of simple convex combinations :
Now, we shall present a theorem for the convergence of iteration (2.11) to the unique fixed point of the contraction mapping, as follows.
Theorem 2.13. Let be a nonempty, closed and convex subset of a complete convex metric space . Let be a -contraction. Let sequences in ( 0,1 ). Then in (2.11) is convergent to the unique fixed point of .
Proof. We have that
.
That is, .
In a similar way, it follows that
.
That is, .
For , we have
.
From the above results, we get that
(2.12)
Since and is a sequence of positive numbers, we get that , so , where .
Because of , for each .
Finally, it implies that .
Letting , because , we get that converges to the unique fixed point of the -contraction operator .
Now we present a particular case of the iteration algorithm (2.11).
Remark 2.14. If we put in the above iteration (2.11), we get iteration (2.5).
Finally, we will study two iterations which are modified implicit Noor II-type iterations through multiple convex combinations.
Let’s call the first one Implicit Noor II with multiple convex combinations, or simply, IN II m.c.c. This is defined, through multiple convex combinations, as
With simple convex combinations, the iteration becomes
Concerning the convergence of the iteration (2.13), we have the following theorem.
Theorem 2.15. Let be a nonempty, closed and convex subset of a complete convex metric space . Let be a -contraction. Let sequences in . Then in (2.13) is convergent to the unique fixed point of .
Proof. We evaluate the following distance
.
So, we have .
In the same manner, we get that
.
So, .
For , it follows that
.
So, it follows that
(2.14)
Because , we get that and .
The above computations imply that :
, with .
With the assumption that , for each , it follows that .
Then, .
Since , letting , we have that , that means the sequence is convergent to the unique fixed point of .
As a particular case of iteration (2.13), we have the following.
Remark 2.16. Putting , we get iteration (2.3).
We present the last iteration. Let’s call it Double Implicit Noor II with multiple convex combinations, or simply, DIN II m.c.c. This is defined, through multiple convex combinations, as
With simple convex combinations, the iteration becomes
In our last theorem of this section, sufficient conditions for the convergence of the iterative process (2.15) are presented.
Theorem 2.17. Let be a nonempty, closed and convex subset of a complete convex metric space . Let be a -contraction. Let sequences in . Then in (2.15) is convergent to the unique fixed point of .
Proof. As in the above proofs, we estimate the following distance
.
So, we have .
In the same manner, it follows that
.
So, .
This means that .
For , we have
.
So, .
This means that
(2.16)
Since , we get that .
Then, .
Since , letting , we have that , that means the sequence is convergent to the unique fixed point of .
In the next remark, a particular case of the iteration (2.15) is presented.
Remark 2.18. Putting , we get iteration (2.3).
In our last two remarks of this section, we refer to the convergence of iteration (2.1), respectively to the case (B) of our first section.
Remark 2.19. It is important to observe the strong property that all the iterations introduced by us, except the first one, that is (2.1), depend for their convergence only on .
Remark 2.20. In Remarks 2.5 and 2.8, we answered the question posed by Agarwal in [2] and found iterations with a better rate of convergence than iteration (1.10). The points (A) and (C) from the first section are addressed in the next section.
3. Rate of Convergence
Using the definition of rate of convergence given by Berinde, Suantai et al. in [6] and [11], we provide sufficient conditions for some of our iterations to converge faster than iteration (1.9). More exactly, the first three iterations are compared with iteration (1.9) using Definition 1.4, since iteration (1.9) was considered by Berinde et al. in convex metric spaces and compared with various classical iterations using the already mentioned definition.
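Comparisons in the sense of Definition 1.4 can also be checked numerically. The sketch below compares the Mann and Ishikawa iterations, (1.3) and (1.2), for a real contraction; the map and the constant parameter choices are our own, and for them the Ishikawa error ratio over the Mann error tends to 0:

```python
def W(a, b, lam):
    return lam * a + (1 - lam) * b        # convexity structure of the real line

def T(x):
    return 0.5 * x                        # contraction, k = 1/2, fixed point p = 0

p = 0.0
xm = xi = 1.0                             # Mann and Ishikawa iterates
for n in range(200):
    xm = W(T(xm), xm, 0.5)                # Mann:     x_{n+1} = W(T x_n, x_n, alpha)
    y = W(T(xi), xi, 0.5)                 # Ishikawa: inner step y_n
    xi = W(T(y), xi, 0.5)                 #           x_{n+1} = W(T y_n, x_n, alpha)
ratio = abs(xi - p) / abs(xm - p)
assert ratio < 1e-6                       # Ishikawa faster here, per Definition 1.4
```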
Our first main result of this section concerns the comparison of the rate of convergence of iterations (2.1) and (1.9).
Theorem 3.1. Let be the sequence defined by iteration (2.1), that is
Let be the sequence defined by iteration (1.9), that is
Let be a nonempty, closed and convex subset of a complete convex metric space . Let be a -contraction. Let and sequences in ( 0,1 ) such that .
Additionally, let’s suppose that the following assumptions are satisfied :
(C1) ,
(C2) and ,
(C3) ,
, for each .
Then, iteration (2.1) converges faster than (1.9).
Proof. We know that and , with
,
We make the following evaluation
.
Since , we get that
.
Since , we obtain
.
We know that .
In a similar manner, we have that
.
So :
.
This means that .
So, let’s denote by
,
Let’s denote .
We have that
From assumption (C1), we get that
From assumptions (C2), we have that, .
Moreover, from assumption (C3), we get .
Then, we get that , which implies that .
This means that .
In conclusion, iteration (2.1) converges faster than (1.9).
Now, our first remark of this section, useful in the last section regarding numerical examples, refers to the condition ( ) of the previous theorem.
Remark 3.2. The condition ( ) from the above theorem represents the condition ensuring that the denominator is positive.
Also, we observe that , so if in our next theorems we have a relation between a coefficient, for example , then, in numerical examples, we must take .
In our second theorem of this section we compare the rate of convergence of the iterations (2.15) and (1.9).
Theorem 3.3. Let be the sequence defined by the iteration (2.15), that is
Let be the sequence defined by iteration (1.9), that is
Let be a nonempty, closed and convex subset of a complete convex metric space . Let be a -contraction. Let and sequences in ( 0,1 ) such that .
Additionally, let’s suppose that the following assumptions are satisfied :
(C1) and ,
(C2) and
, for each .
Then, iteration (2.15) converges faster than iteration (1.9).
Proof. We know that and , with
,
.
Let’s denote . We have that
. , because and , for each .
From assumptions (C1) and (C2), we have that
.
Then, we get that , which implies that .
This means that .
In conclusion, iteration (2.15) converges faster than iteration (1.9).
In our third main result, the comparison of the rate of convergence between the iterations (2.13) and (1.9) is presented.
Theorem 3.4. Let be the sequence defined by the iteration (2.13), that is
Let be the sequence defined by iteration (1.9), that is
Let be a nonempty, closed and convex subset of a complete convex metric space . Let be a -contraction. Let and sequences in ( 0,1 ) such that .
Additionally, let’s suppose that the following assumptions are satisfied :
(C1) and ,
(C2) and
, for each . Then, iteration (2.13) converges faster than iteration (1.9).
Proof. We know that and , with
,
.
Let’s denote . We factorize and simplify the terms in and we get
.
From assumptions (C1) and (C2), we get that
.
So we get that , which implies that .
This means that .
In conclusion, iteration (2.13) converges faster than iteration (1.9).
•
Now, we compare the rate of convergence between the iterations given by Gursoy, Karakaya, Abbas, Nazir et al. and our improved iterations with convex combinations, or those based on the improved implicit Noor iteration, following Definitions 1.2 and 1.3 and using the technique present in the articles [1], [8] and [10]. So, from now until the next section, we use the already mentioned definitions as in the articles of Gursoy, Karakaya, Abbas et al.
A comparison between the rate of convergence of the iterations (2.3) and (2.5) is presented below.
Theorem 3.5. Let be the sequence defined by iteration (2.3), that is
Let be the sequence defined by iteration (2.5), that is
Let be a nonempty, closed and convex subset of a complete convex metric space . Let be a -contraction. Let sequences in .
Additionally, let’s suppose that the following assumptions are satisfied :
(C1) ,
(C2) .
Then, iteration (2.3) converges faster than (2.5).
Proof. We know that and , with
,
.
Let’s denote .
We have that .
From assumption (C1), we get that .
From assumption (C2), we get that .
Then, we have that .
Then, we get that , which implies that .
This means that .
In conclusion, iteration (2.3) converges faster than (2.5).
Regarding iterations (2.5) and (1.16), sufficient conditions are presented such that iteration (2.5) is faster than iteration (1.16).
Theorem 3.6. Let be the sequence defined by the iteration (2.5), that is
Let be the sequence defined by the iteration (1.16), that is
Let be a nonempty, closed and convex subset of a complete convex metric space . Let be a -contraction. Let , sequences in ( 0,1 ), with .
Additionally, let’s suppose that the following assumptions are satisfied:
(C1) ,
(C2) ,
(C3) .
Then, iteration (2.5) converges faster than iteration (1.16).
Proof. We know that and , with
,
.
Let’s denote .
We have that .
We know that .
Also, from assumptions (C1) and (C3), we have that .
Then, .
From assumption (C2), we have that .
Then, we get that , which implies that .
This means that .
Now, by definitions (1.2) and (1.3), since and converge to 0 under the assumptions of the hypothesis, it follows that iteration (2.5) converges faster than iteration (1.16).
Comparing the rate of convergence between the iterations (2.5) and (2.11), we have the following.
Theorem 3.7. Let be the sequence defined by the iteration (2.5), that is
Let be the sequence defined by the iteration (2.11), that is
Let be a nonempty, closed and convex subset of a complete convex metric space . Let be a -contraction. Let sequences in ( 0,1 ).
Suppose the following assumptions are satisfied:
(C1) and ,
(C2) ,
Then, iteration (2.11) converges faster than (2.5), with respect to definitions (1.2) and (1.3).
Proof. We know that and , with
,
Then, .
Let’s denote .
Then, we have that .
From assumption (C1), we have that .
From assumption (C2), we get .
It is easy to see that , because from (C1).
We will define . Then .
Moreover, .
So we get that , which implies that .
This means that .
In conclusion, iteration (2.11) converges faster than (2.5).
Now, in our next theorem we make a comparison between the rate of convergence of the iterations (2.11) and (2.9) in a complete convex metric space, under the assumptions of convergence of both iterative processes.
Theorem 3.8. Let be the sequence defined by the iteration (2.11), that is
Let be the sequence defined by the iteration (2.9), that is
Let be a nonempty, closed and convex subset of a complete convex metric space . Let be a -contraction. Let sequences in ( 0,1 ).
Let’s suppose the following assumptions are satisfied:
(C1) and ,
(C2) ,
(C3) .
Then, iteration (2.11) converges faster than iteration (2.9).
Proof. We know that and , with
.
This means that ,
.
Also, let , with and .
Let’s denote .
Then, we have that .
Assumption (C1) implies that .
From assumption (C2), we get .
From assumption (C3), since , it implies that , that is .
So we get that , which implies that .
This means that .
In conclusion, iteration (2.11) converges faster than (2.9).
For the comparison between the iterative algorithms (2.9) and (2.7) in a complete convex metric space, under the assumptions of convergence of the iterations, we have the following.
Theorem 3.9. Let be the sequence defined by the iteration (2.9), that is
Let be the sequence defined by the iteration (2.7), that is
Let be a nonempty, closed and convex subset of a complete convex metric space . Let be a -contraction. Let sequences in ( 0,1 ).
Let’s suppose the following assumptions are satisfied:
(C1) ,
(C2) and .
(C3) .
Then, iteration (2.7) converges faster than iteration (2.9).
Proof. We know that and , with
.
This means that ,
Let’s denote .
Then, we have that .
From assumption (C1), we get .
From assumption (C2), we get .
This means that the limit is , i.e. . By assumption (C3), we get that the limit is greater than 1.
Denote . Then . We get .
So we get that , which implies that .
This means that .
In conclusion, iteration (2.7) converges faster than (2.9).
For comparison of the rate of convergence of iterative processes (2.15) and (2.3) under the assumptions of convergence, we present the following theorem.
Theorem 3.10. Let be the sequence defined by the iteration (2.15), that is
Let be the sequence defined by the iteration (2.3), that is
Let be a nonempty, closed and convex subset of a complete convex metric space . Let be a -contraction. Let sequences in ( 0,1 ).
Let’s suppose the following assumptions are satisfied:
(C1) ,
(C2) ,
(C3) .
Then, iteration (2.15) converges faster than iteration (2.3).
Proof. We know that and , with
.
Let’s denote .
We have that .
Since , it is easy to see that .
Then, we have .
Using the assumption (C1), we get .
From assumption (C2), we get , because of assumption (C3), i.e. , since .
Since , we get that , which implies that .
This means that .
In conclusion, iteration (2.15) converges faster than iteration (2.3).
Now, if the iterative algorithms (2.11) and (1.16) both converge to the fixed point of the contraction in a complete convex metric space, then for the comparison of their rate of convergence we have the following.
Theorem 3.11. Let be the sequence defined by the iteration (2.11), that is
Let be the sequence defined by the iteration (1.16), that is
Let be a nonempty, closed and convex subset of a complete convex metric space . Let be a -contraction. Let sequences in ( 0,1 ), with .
Let’s suppose the following assumptions are satisfied:
(C1) ,
(C2) and ,
(C3) .
Then, iteration (2.11) converges faster than iteration (1.16).
Proof. We know that and , with
.
This means that .
and .
Let’s denote .
Then, .
From assumption (C1), we have that
.
From assumption (C2), we get .
From assumption (C3), since , we get .
This implies that .
So we get that , which implies that .
This means that .
In conclusion, iteration (2.11) converges faster than iteration (1.16), by definitions (1.2) and (1.3).
Finally, for our last theoretical result, regarding the comparison of the rate of convergence of iterations (2.3) and (1.12) we have the following theorem.
Theorem 3.12. Let be the sequence defined by the iteration (2.3), that is
Let be the sequence defined by the iteration (1.12), that is
Let be a nonempty, closed and convex subset of a complete convex metric space . Let be a -contraction. Let sequences in ( 0,1 ), with .
Let’s suppose the following assumptions are satisfied:
(C1) and ,
(C2) ,
(C3) .
Then, iteration (2.3) converges faster than iteration (1.12).
Proof. We know that and , with
.
Also, .
Let’s denote .
Then, .
Since and and using assumption (C1), we get that:
.
Assumption (C2) implies that .
From assumption (C3), we get that .
So we get that , which implies that .
This means that .
In conclusion, iteration (2.3) converges faster than iteration (1.12).
4. Numerical Examples
Throughout this section, we illustrate points (A) and (C) from the first section with numerical examples. All the examples presented below satisfy both the conditions for the comparison of the rate of convergence and the conditions from the convergence analysis.
Let , where , with and . Also, let the first iterate be and let the number of iterations for each comparison of iterative processes be . Regarding Theorem 3.1, we present a numerical example for iterations (2.1) and (1.9).
Example 4.1. Let and .
Also, iteration (2.1) is
and iteration (1.9) is
Furthermore, condition is , which is satisfied for , so it is a valid assumption.
We have that
ITERATION (1.9)    ITERATION (2.1)
100.0              100.0
42.60651629        87.34375
19.98824221        77.51757813
9.95461865         69.6446991
5.16619183         63.1872579
2.76380633         57.79236367
1.51364647         53.21713488
0.84462672         49.288213597
0.47858118         45.87823358
0.27466398         42.89136941
0.1593546          40.25407114
0.09332257         37.90891209
0.05509877         35.81038302
0.03276441         33.92194486
0.01960717         32.21391854
0.01180006         30.6619459
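Tables such as the one above can be reproduced with a short script. The sketch below is illustrative only: the actual mapping and parameter sequences of Example 4.1 are elided in the text, so we substitute a hypothetical contraction T(x) = x/2 + 10 (contraction constant 1/2, fixed point p = 20) driven by a Mann-type scheme from x_1 = 100:

```python
def T(x):
    # hypothetical contraction with constant 1/2 and fixed point p = 20;
    # stands in for the mapping elided in the text above
    return 0.5 * x + 10.0

def mann_errors(x1, alphas, p):
    """Errors |x_n - p| of the Mann-type scheme x_{n+1} = (1 - a_n) x_n + a_n T(x_n)."""
    x, errs = x1, [abs(x1 - p)]
    for a in alphas:
        x = (1.0 - a) * x + a * T(x)
        errs.append(abs(x - p))
    return errs

# x_1 = 100 as in the examples; alpha_n = 1/2 chosen purely for illustration
errs = mann_errors(100.0, [0.5] * 15, p=20.0)
for n, e in enumerate(errs, start=1):
    print(n, round(e, 8))
```

Swapping in the schemes (1.9) or (2.1) with their parameter sequences yields the corresponding error columns.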
Regarding Theorem 3.3, we present a numerical example for iterations (2.15) and (2.1):
Example 4.2. Let and .
Also, iteration (2.15) is
and iteration (2.1) is
Moreover, condition is , which is satisfied for each , so it is valid.
Now, we have that
Regarding Theorem 3.4, we present a numerical example for iterations (2.13) and (2.1):
Example 4.3. Let and .
ITERATION (2.15)    ITERATION (2.1)
100.0               100.0
4.89795918          83.68055555
0.24489796          71.54170953
0.01243926          62.2015419
0.00063973          54.81903566
0.00003323          48.85492762
0.00000174          43.94855668
0.00000009          39.85017937
                    36.38173783
                    33.41308909
                    30.84705120
                    28.60968792
                    26.64381755
                    24.90456835
                    23.35626846
                    21.97022728
Also, iteration (2.13) is
and iteration (2.1) is
Also, condition is , so the assumption is valid.
Now, we have that
Regarding Theorem 3.5, we present a numerical example for iterations (2.3) and (2.5):
Example 4.4. Let and .
Also, iteration (2.3) is
ITERATION (2.13)    ITERATION (2.1)
100.0               100.0
33.92857143         89.44444444
12.32425184         81.00256032
4.69269589          74.06975455
1.84976637          68.26101041
0.74877079          63.31617686
0.30955125          59.05170739
0.13018573          55.33372046
0.0555366           52.06192489
0.0239783           49.15952527
0.01045996          46.56662042
0.0046038           44.2357406
0.00204218          42.12874589
0.00091215          40.21461916
0.00040992          38.46786313
0.00018523          36.86731548
and iteration (2.5) is
Finally, we have
ITERATION (2.3)    ITERATION (2.5)
100.0              100.0
7.64126855         10.87411293
0.59661556         1.20823477
0.04612052         0.13291663
0.00349281         0.0143248
0.000258176        0.0015068
0.00001861         0.00015453
0.00000131         0.00001545
0.00000009         0.00000151
                   0.00000014
                   0.00000001
Regarding Theorem 3.6, we present a numerical example for iterations (2.5) and (1.16):
Example 4.5. Let and .
Also, iteration (1.16) is
and iteration (2.5) is
By comparison, we get
ITERATION (1.16)    ITERATION (2.5)
100.0               100.0
23.69791667         6.66666667
5.7023112           0.63157895
1.38399845          0.06970604
0.33776153          0.00839054
0.08274403          0.00106841
0.02032688          0.00014153
0.00500408          0.00001931
0.00123396          0.00000269
0.00030469          0.00000038
0.00007532          0.00000006
0.00001864
0.00000461
0.00000114
0.00000028
0.00000007
Regarding Theorem 3.7, we present a numerical example for iterations (2.7) and (2.5):
Example 4.6. Let and .
In our context of normed spaces, iteration (2.7) is
and iteration (2.5) is
By comparison, we get
ITERATION (2.7)    ITERATION (2.5)
100.0              100.0
18.65113029        11.62790698
3.30315188         1.35208221
0.56378435         0.15721886
0.0935548          0.01828126
0.01517991         0.00212573
0.00241797         0.00024718
0.0003792          0.00002874
0.00005868         0.00000334
0.00000898         0.00000039
0.00000136         0.00000005
0.0000002          0.00000003
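Reading the tables through the lens of definitions (1.2) and (1.3) amounts to inspecting the ratio of the two error columns. As a sketch, the first rows of the Example 4.6 table above give:

```python
# first rows of the Example 4.6 table: errors of iterations (2.7) and (2.5)
it_2_7 = [100.0, 18.65113029, 3.30315188, 0.56378435, 0.0935548, 0.01517991]
it_2_5 = [100.0, 11.62790698, 1.35208221, 0.15721886, 0.01828126, 0.00212573]

# the ratio of the (2.5) errors to the (2.7) errors decreases along these rows,
# i.e. the (2.5) column shrinks faster over this stretch of the table
ratios = [b / a for a, b in zip(it_2_7, it_2_5)]
print([round(r, 4) for r in ratios])
```

Whether such a finite tail ratio keeps decreasing depends on the auxiliary parameter sequences, which is precisely the sensitivity of definitions (1.2) and (1.3) discussed in the conclusions.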
5. Conclusions
In the present article, through an exhaustive approach involving iterative processes in the context of convex metric spaces, we introduced numerous iterations which converge to the fixed point of a single-valued mapping faster than some iterations found in the fixed point literature. The first three iterations introduced by us through multiple convex combinations were compared to iteration (2.1) using definition (1.4), as in the article of Berinde [6]. The other fixed point iterations were compared with well-known iterative processes using definitions (1.2) and (1.3), from two points of view: first, Agarwal et al. [2] used this type of definition; second, in [1], [8] and the other articles given as references, Gursoy, Abbas and the other authors also compared iterations with this type of definition.
Moreover, Examples 4.1, 4.2 and 4.3 show that definition (1.4) is more precise for iterations given through multiple convex combinations, and that our new iterative processes converge faster than the iterations given by Suantai, Berinde et al. in [6] and [14].
Finally, from Example 4.6 we can easily observe that definitions (1.2) and (1.3) depend on the auxiliary sequences and . Thus, we also gave an example of two iterations justifying the fact that definitions (1.2) and (1.3) are not very useful in practical applications.
Acknowledgments
The author thanks the referees and the editors for valuable comments and suggestions that improved the article to its present form.
References
[1] M. Abbas and T. Nazir, A new faster iteration process applied to constrained minimization and feasibility problems, Matematicki Vesnik 66 (2014) 223-234.
[2] R. Agarwal, D. O’Regan and D.R. Sahu, Iterative construction of fixed points of nearly asymptotically nonexpansive mappings, Journal of Nonlinear and Convex Analysis 8 (2007) 61-79.
[3] S. Akbulut and M. Ozdemir, Picard iteration converges faster than Noor iteration for a class of quasi-contractive operators, Chiang Mai J. Sci. 39 (2012) 688-692.
[4] V. Berinde, Picard iteration converges faster than Mann iteration for a class of quasi-contractive operators, Fixed Point Theory Appl. 2004 (2004) 97-105.
[5] M.R. Bridson and A. Haefliger, Metric Spaces of Non-Positive Curvature, Springer-Verlag, Berlin, Heidelberg, New York, 1999.
[6] H. Fukhar-ud-din and V. Berinde, Iterative methods for the class of quasi-contractive type operators and comparison of their rate of convergence in convex metric spaces, Filomat 30 (2016) 223-230.
[7] K. Goebel and W. Kirk, Iteration processes for nonexpansive mappings, Topological Methods in Nonlinear Functional Analysis, Contemporary Mathematics 21, Amer. Math. Soc. Providence (1983), 115-123.
[8] F. Gursoy and V. Karakaya, A Picard-S hybrid type iteration method for solving a differential equation with retarded argument, http://arxiv.org/abs/1403.2546
[9] H.V. Machado, A characterization of convex subsets of normed spaces, Kodai Math. Sem. Rep. 25 (1973) 307-320.
[10] R. Chugh, P. Malik and V. Kumar, On a new faster implicit fixed point iterative scheme in convex metric spaces, J. Function Spaces (2015), Article ID 905834.
[11] W. Phuengrattana and S. Suantai, Comparison of the rate of convergence of various iterative methods for the class of weak contractions in Banach Spaces, Thai J. Math. 11 (2013) 217-226.
[12] S. Reich and I. Shafrir, Nonexpansive iterations in hyperbolic spaces, Nonlinear Anal. 15 (1990) 537-558.
[13] W. Sintunavarat and A. Pitea, On a new iteration scheme for numerical reckoning fixed points of Berinde mappings with convergence analysis, J. Nonlinear Science Appl. 9 (2016) 2553-2562.
[14] S. Suantai, Weak and strong convergence criteria of Noor iterations for asymptotically nonexpansive mappings, J. Math. Anal. Appl. 311 (2005) 506-517.
[15] W. Takahashi, A convexity in metric spaces and nonexpansive mappings, Kodai Math. Sem. Rep. 22 (1970) 142-149.
[16] L.A. Talman, Fixed points for condensing multifunctions in metric spaces with convex structure, Kodai Math. Sem. Rep. 29 (1977) 62-70.
[17] T. Zamfirescu, Fixed point theorems in metric spaces, Arch. Math. (Basel), 23 (1972) 292-298.