
Forward-backward splitting algorithm with self-adaptive method for finite family of split minimization and fixed point problems in Hilbert spaces

Hammed A. Abass1, Kazeem O. Aremu2, Olawale K. Oyewole3, Akindele A. Mebawondu4, Ojen K. Narain5

Received: July 23, 2023; accepted: October 23, 2023; published online: December 22, 2023.

1 Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, P.O. Box 94, Medunsa 0204, Ga-Rankuwa, South Africa, e-mail: hammedabass548@gmail.com, Abassh@ukzn.ac.za.
2 Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, P.O. Box 94, Medunsa 0204, Ga-Rankuwa, South Africa; Department of Mathematics, Usmanu Danfodiyo University Sokoto, PMB 2364, Sokoto State, Nigeria, e-mail: aremu.kazeem@udusok.edu.ng, aremukazeemolalekan@gmail.com.
3 School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa, e-mail: oyewolelawalekazeem@gmail.com.
4 Department of Computer Science and Mathematics, Mountain Top University, Prayer City, Ogun State, Nigeria; School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa, e-mail: dele@aims.ac.za, aamebawondu@mtu.edu.ng.
5 School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa, e-mail: naraino@ukzn.ac.za.

In this paper, we introduce an inertial forward-backward splitting method together with a Halpern iterative algorithm for approximating a common solution of a finite family of split minimization problems involving two proper, lower semicontinuous and convex functions, and a fixed point problem of a nonexpansive mapping in real Hilbert spaces. Under suitable conditions, we prove that the sequence generated by our algorithm converges strongly to a solution of the aforementioned problems. The stepsizes studied in this paper are designed so that they require neither the Lipschitz continuity condition on the gradient nor prior knowledge of the operator norm. Finally, we present a numerical experiment to show the performance of the proposed method. The results discussed in this paper extend and complement many related results in the literature.

MSC. 47H06, 47H09, 47J05, 47J25.

Keywords. Nonexpansive mapping, minimization problem, inertial method, forward-backward splitting method, fixed point problem.

1 Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$, and let $f,g:H\to\mathbb{R}\cup\{+\infty\}$ be two proper, lower semicontinuous and convex functions such that $f$ is Fréchet differentiable on an open set containing the domain of $g$. The Convex Minimization Problem (CMP) is formulated as follows:

$$\min_{x\in H}\{f(x)+g(x)\}. \tag{1}$$

We denote by $\Upsilon$ the solution set of (1). The CMP (1) is a general form of the classical minimization problem, which is given as:

$$f(x^*)=\min_{y\in H}f(y). \tag{2}$$

The minimization problems (1)–(2) and their various modifications are known to have notable applications in optimal control, signal processing, system identification, machine learning, and image analysis; see, e.g., [ 5 , 3 , 2 , 26 ] . It is well known that the CMP (1) relates to the following fixed point equation:

$$x=\operatorname{prox}_{\beta g}\big(x-\beta\nabla f(x)\big), \tag{3}$$

where $\beta$ is a positive real number and $\operatorname{prox}_g$ is the proximal operator of $g$. The Moreau-Yosida resolvent of $g$ in a Hilbert space is defined as follows:

$$J_{\lambda}^{g}(x)=\operatorname{prox}_{\lambda g}(x)=\operatorname*{argmin}_{y\in H}\Big\{g(y)+\tfrac{1}{2\lambda}\|y-x\|^2\Big\},\quad \forall\, x\in H, \tag{4}$$

where $\operatorname{argmin}g:=\{x^*\in H : g(x^*)\le g(x)\ \text{for all}\ x\in H\}$.
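For concreteness, the following minimal Python sketch (our own illustration, not from the paper; all helper names are assumptions) evaluates the proximal operator in two standard cases where (4) has a closed form, and performs one step of the fixed point iteration suggested by (3).

import numpy as np

def prox_l1(x, lam):
    # prox_{lam*||.||_1}(x): componentwise soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_quadratic(x, lam, D, b):
    # prox_{lam*f}(x) for f(y) = 0.5*||D y - b||^2, obtained by solving the
    # optimality condition (I + lam*D^T D) y = x + lam*D^T b
    n = D.shape[1]
    return np.linalg.solve(np.eye(n) + lam * (D.T @ D), x + lam * (D.T @ b))

# One forward-backward step x <- prox_{beta g}(x - beta*grad f(x)) as in (3),
# with f(y) = 0.5*||D y - b||^2 (so grad f(y) = D^T (D y - b)) and g = ||.||_1:
rng = np.random.default_rng(0)
D, b = rng.standard_normal((5, 3)), rng.standard_normal(5)
x, beta = rng.standard_normal(3), 0.1
x_next = prox_l1(x - beta * (D.T @ (D @ x - b)), beta)
print(x_next)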

In 2012, Lin and Takahashi [ 28 ] introduced the following forward-backward algorithm:

$$x_{n+1}=\alpha_nF(x_n)+(1-\alpha_n)\operatorname{prox}_{\beta_ng}\big(x_n-\beta_n\nabla f(x_n)\big), \tag{5}$$

where $F:H\to H$ is a contraction, $\{\alpha_n\}\subset(0,1)$, $\{\beta_n\}\subset(0,+\infty)$, $\nabla f$ is Lipschitz continuous and $g$ is a convex and lower semicontinuous function. They obtained a strong convergence result for algorithm (5) under the following mild conditions:

$$\lim_{n\to\infty}\alpha_n=0,\quad \sum_{n=1}^{\infty}|\alpha_n-\alpha_{n+1}|<\infty,\quad \sum_{n=1}^{\infty}\alpha_n=\infty,\quad \sum_{n=1}^{\infty}|\beta_n-\beta_{n+1}|<\infty,\quad 0<a\le\beta_n\le\tfrac{2}{L},$$

where $L$ is the Lipschitz constant of $\nabla f$. Also, Wang and Wang [ 39 ] proposed the following forward-backward splitting method: find $x_0\in H$ such that

$$x_{n+1}=\alpha_nF(x_n)+\gamma_nx_n+\mu_n\operatorname{prox}_{\beta_ng}\big(x_n-\beta_n\nabla f(x_n)\big), \tag{6}$$

where $\{\alpha_n\}\subset(0,1)$, $\{\mu_n\}\subset(0,2)$, $\{\gamma_n\}\subset(-2,1)$ with $\alpha_n+\gamma_n+\mu_n=1$, and $F:H\to H$ is a contraction. They proved that the sequence generated by (6) converges strongly to a point of $\Upsilon$.
In 2016, Bello Cruz and Nghia [ 10 ] investigated the forward-backward method with a linesearch that eliminates the undesired Lipschitz assumption on the gradient of $f$. They proposed the following algorithm and established its weak convergence:

Algorithm 1


Initialization: Take $x_0\in\operatorname{dom}g$, $\sigma>0$, $\theta\in(0,1)$, $\delta\in(0,\tfrac12)$.
Iterative steps: Given $x_n$, set

$$x_{n+1}=\operatorname{prox}_{\beta_ng}\big(x_n-\beta_n\nabla f(x_n)\big),$$

where $\beta_n=\text{Linesearch}(x_n,\sigma,\theta,\delta)$ is given as follows:

Input: Set $\beta=\sigma$ and $J(x,\beta)=\operatorname{prox}_{\beta g}\big(x-\beta\nabla f(x)\big)$ with $x\in\operatorname{dom}g$.
While

$$\beta\|\nabla f(J(x,\beta))-\nabla f(x)\|>\delta\|J(x,\beta)-x\|,$$

do $\beta=\theta\beta$.
End While
Output: $\beta$.
Stopping criterion: If $x_{n+1}=x_n$, then stop.
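As an illustration, here is a Python sketch of the backtracking rule of Algorithm 1 and one forward-backward step (a hypothetical rendering of ours; grad_f and prox_g are assumed user-supplied callables, with prox_g taking the point and the stepsize). The linesearch is known to terminate under suitable assumptions on $\nabla f$ [ 10 ].

import numpy as np

def linesearch(x, sigma, theta, delta, grad_f, prox_g):
    # Shrink beta by the factor theta until
    # beta*||grad_f(J(x, beta)) - grad_f(x)|| <= delta*||J(x, beta) - x||,
    # where J(x, beta) = prox_{beta g}(x - beta*grad_f(x)).
    beta = sigma
    while True:
        J = prox_g(x - beta * grad_f(x), beta)
        if beta * np.linalg.norm(grad_f(J) - grad_f(x)) <= delta * np.linalg.norm(J - x):
            return beta
        beta *= theta

def forward_backward_step(x, sigma, theta, delta, grad_f, prox_g):
    beta = linesearch(x, sigma, theta, delta, grad_f, prox_g)
    return prox_g(x - beta * grad_f(x), beta)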

Very recently, Kunrada and Cholamjiak [ 27 ] proposed a forward-backward algorithm involving the viscosity approximation method and a stepsize that does not require the Lipschitz continuity condition on the gradient, as follows:

Algorithm 2


Initialization: Let $F:\operatorname{dom}g\to\operatorname{dom}g$ be a contraction. Take $x_0\in\operatorname{dom}g$, $\sigma>0$, $\theta\in(0,1)$, $\delta\in(0,\tfrac12)$, and compute

$$y_n=\operatorname{prox}_{\beta_ng}\big(x_n-\beta_n\nabla f(x_n)\big),$$

where $\beta_n=\sigma\theta^{m_n}$ and $m_n$ is the smallest nonnegative integer such that

$$2\beta_n\max\Big\{\big\|\nabla f\big(\operatorname{prox}_{\beta_ng}(y_n-\beta_n\nabla f(y_n))\big)-\nabla f(y_n)\big\|,\ \|\nabla f(x_n)-\nabla f(y_n)\|\Big\}\le\delta\Big(\big\|\operatorname{prox}_{\beta_ng}\big(y_n-\beta_n\nabla f(y_n)\big)-y_n\big\|+\|x_n-y_n\|\Big).$$

Construct xn+1 by

$$x_{n+1}=\alpha_nF(x_n)+(1-\alpha_n)\operatorname{prox}_{\beta_ng}\big(y_n-\beta_n\nabla f(y_n)\big).$$

They proved a strong convergence theorem for Algorithm 2 under some weakened assumptions on the stepsize.
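A compact Python sketch of one iteration of Algorithm 2 (again a hypothetical rendering, with user-supplied F, grad_f and prox_g) reads as follows; note that $y_n$ must be recomputed for each trial stepsize $\beta=\sigma\theta^{m}$.

import numpy as np

def algorithm2_step(x, F, alpha, sigma, theta, delta, grad_f, prox_g):
    # One iteration of the viscosity forward-backward method above; the trial
    # stepsize beta = sigma*theta**m is decreased until the displayed
    # linesearch condition holds.
    m = 0
    while True:
        beta = sigma * theta ** m
        y = prox_g(x - beta * grad_f(x), beta)
        z = prox_g(y - beta * grad_f(y), beta)
        lhs = 2.0 * beta * max(np.linalg.norm(grad_f(z) - grad_f(y)),
                               np.linalg.norm(grad_f(x) - grad_f(y)))
        if lhs <= delta * (np.linalg.norm(z - y) + np.linalg.norm(x - y)):
            return alpha * F(x) + (1.0 - alpha) * z
        m += 1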

We observe that the choice of stepsizes in Algorithm 1 and Algorithm 2 depends heavily on linesearches, which are known to slow down the rate of convergence of iterative algorithms (see [ 25 , 34 ] ).

The Split Feasibility Problem (SFP) was first introduced by Censor and Elfving [ 16 ] . Let $C$ and $Q$ be nonempty, closed and convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and let $A:H_1\to H_2$ be a bounded linear operator. The SFP is defined as follows:

$$\text{Find } x^*\in C \text{ such that } Ax^*\in Q.$$

The SFP arises in many real-world fields, such as signal processing, medical image reconstruction, intensity-modulated radiation therapy, sensor networks, antenna design, material science, computerized tomography, data denoising and data compression [ 7 , 12 , 11 , 15 , 17 ] . Several variants of the SFP for different optimization problems have been studied extensively [...]. Let $C$ and $Q$ be nonempty, closed and convex subsets of real Hilbert spaces $H_1$ and $H_2$, let $g:H_1\to\mathbb{R}\cup\{+\infty\}$ and $f:H_2\to\mathbb{R}\cup\{+\infty\}$ be two proper, lower semicontinuous and convex functions, and let $A:H_1\to H_2$ be a bounded linear operator. The Split Minimization Problem (SMP) is to

$$\text{find } x^*\in C \text{ such that } x^*=\operatorname*{argmin}_{x\in C}g(x) \tag{7}$$

and such that

$$\text{the point } y^*=Ax^*\in Q \text{ solves } y^*=\operatorname*{argmin}_{y\in Q}f(y). \tag{8}$$

Many researchers have employed different types of iterative algorithms to study the SMP (7)–(8) in Hilbert and Banach spaces. For instance, Abass et al. [ 5 ] proposed a proximal-type algorithm to solve the SMP (7)–(8) in Hilbert spaces; they established that the sequence generated by their algorithm converges strongly to a solution of the SMP. Very recently, Abass et al. [ 3 ] introduced another proximal-type algorithm to approximate solutions of systems of SMPs and fixed point problems of nonexpansive mappings in Hilbert spaces. They showed that their algorithm converges to a common solution of the SMP and a fixed point of the nonlinear mappings.

Constructing iterative schemes with a faster rate of convergence is usually of great interest. The inertial-type algorithm, which originated from the equation of an oscillator with damping and a conservative restoring force, has been an important tool for improving the performance of algorithms and has some nice convergence characteristics. In general, the main feature of inertial-type algorithms is that the previous iterates are used to construct the next one. Since the introduction of inertial-like algorithms, many authors have combined the inertial term $[\theta_n(x_n-x_{n-1})]$ with different kinds of iterative algorithms, such as the Mann, Krasnoselskii, Halpern and viscosity iterations, to approximate solutions of fixed point problems and optimization problems. Most authors were able to prove weak convergence results, while only a few proved strong convergence results.

Polyak [ 33 ] was the first to propose the heavy ball method. Alvarez and Attouch [ 6 ] adapted it to the setting of a general maximal monotone operator via the Proximal Point Algorithm (PPA); the resulting method, called the inertial PPA, is of the form:

$$\begin{cases}y_n=x_n+\theta_n(x_n-x_{n-1}),\\ x_{n+1}=(I+r_nB)^{-1}y_n,\end{cases}\qquad n\ge1. \tag{9}$$

They proved that if $\{r_n\}$ is non-decreasing and $\{\theta_n\}\subset[0,1)$ with

$$\sum_{n=1}^{\infty}\theta_n\|x_n-x_{n-1}\|^2<\infty, \tag{10}$$

then the algorithm (9) converges weakly to a zero of $B$. In particular, condition (10) holds for $\theta_n<\tfrac13$. Here $\theta_n$ is an extrapolation factor. See also [ 1 , 4 , 6 , 18 , 20 , 24 , 28 ] for results on inertial methods.
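The following toy Python sketch (ours, for illustration only) runs the inertial PPA (9) for the one-dimensional operator $B(x)=x-5$, whose resolvent is available in closed form; with $\theta_n\equiv0.3<\tfrac13$, condition (10) is satisfied and the iterates approach the zero $x=5$.

def inertial_ppa(x0, x1, theta=0.3, r=1.0, iters=50):
    # Inertial PPA (9) for B(x) = x - 5 on the real line; the resolvent is
    # (I + r*B)^{-1}(y) = (y + 5*r)/(1 + r), and the unique zero of B is x = 5.
    x_prev, x = x0, x1
    for _ in range(iters):
        y = x + theta * (x - x_prev)              # inertial extrapolation
        x_prev, x = x, (y + 5.0 * r) / (1.0 + r)  # resolvent (backward) step
    return x

print(inertial_ppa(0.0, 1.0))  # tends to 5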

We highlight our contributions in this paper as follows:

  • Unlike the result of [ 10 ] , which establishes weak convergence, we prove a strong convergence theorem for the sequence generated by our algorithm. Note that in solving optimization problems, strong convergence algorithms are more desirable than their weak convergence counterparts.

  • The stepsize used in our algorithm is chosen self-adaptively and is not restricted by any Lipschitz constant. This improves the corresponding results of [ 5 , 10 , 24 ] .

  • The method of proof in our convergence analysis is simpler than and different from those used by many other authors, such as [ 2 , 10 , 38 , 30 ] .

  • The CMP considered in our article generalizes the one considered in [ 3 ] , which corresponds to the case where $f$ is identically zero.

  • We emphasize that the main advantage of our algorithm is that it does not require knowledge of the Lipschitz constant of the gradient, which makes it more practical for computation.

Inspired by the aforementioned works and ongoing research in this direction, we develop an inertial Halpern-type forward-backward splitting method for approximating a common solution of a finite family of SMPs associated with two proper, lower semicontinuous and convex functions, and a fixed point problem of a nonexpansive mapping in real Hilbert spaces. Under suitable conditions, we establish that the sequence generated by our algorithm converges strongly to a solution of the aforementioned problems. The selection of the stepsizes in our algorithm requires neither the Lipschitz continuity condition on the gradient nor prior knowledge of the operator norm. Finally, we present a numerical experiment to show the performance of the proposed method. Our result extends and complements many related results in the literature.

2 Preliminaries

We state some known and useful results which will be needed in the proof of our main theorem. In the sequel, we denote strong and weak convergence by "$\to$" and "$\rightharpoonup$", respectively.

Definition 3

Let $C$ be a convex subset of a vector space $X$ and $f:C\to\mathbb{R}\cup\{+\infty\}$ be a map. Then:

  • $f$ is convex if for each $\lambda\in[0,1]$ and $x,y\in C$, we have
$$f(\lambda x+(1-\lambda)y)\le\lambda f(x)+(1-\lambda)f(y);$$
  • $f$ is called proper if there exists at least one $x\in C$ such that
$$f(x)\neq+\infty;$$
  • $f$ is lower semicontinuous at $x_0\in C$ if
$$f(x_0)\le\liminf_{x\to x_0}f(x).$$
Let $H$ be a real Hilbert space. For all $x,y\in H$, we have
$$\|x+y\|^2=\|x\|^2+2\langle x,y\rangle+\|y\|^2,$$

and

$$\|x+y\|^2\le\|x\|^2+2\langle y,\,x+y\rangle.$$

Lemma 4 [ 19 ]

Let $H$ be a real Hilbert space. Then for all $x,y\in H$ and $\alpha\in(0,1)$, the following equalities hold:

$$\|\alpha x+(1-\alpha)y\|^2=\alpha\|x\|^2+(1-\alpha)\|y\|^2-\alpha(1-\alpha)\|x-y\|^2,$$
$$2\langle x,y\rangle=\|x\|^2+\|y\|^2-\|x-y\|^2=\|x+y\|^2-\|x\|^2-\|y\|^2.$$

Definition 5

Let $H$ be a real Hilbert space and $h:H\to\mathbb{R}\cup\{+\infty\}$ a proper function. The subdifferential of $h$ at $x$ is defined by

$$\partial h(x)=\{v\in H:\langle v,\,y-x\rangle\le h(y)-h(x),\ \forall\, y\in H\}.$$

Lemma 6 [ 14 ]

Let $H$ be a real Hilbert space. The subdifferential operator $\partial h$ is maximal monotone. Furthermore, the graph of $\partial h$, $\operatorname{Gra}(\partial h)=\{(x,v)\in H\times H: v\in\partial h(x)\}$, is demiclosed, i.e., if a sequence $(x_n,v_n)\subset\operatorname{Gra}(\partial h)$ is such that $\{x_n\}_{n\in\mathbb{N}}$ converges weakly to $x$ and $\{v_n\}_{n\in\mathbb{N}}$ converges strongly to $v$, then $(x,v)\in\operatorname{Gra}(\partial h)$.

We briefly recall that the proximal operator $\operatorname{prox}_g:H\to\operatorname{dom}(g)$ is given by $\operatorname{prox}_g(z)=(I+\partial g)^{-1}(z)$, $z\in H$, where $I$ is the identity operator. It is well known that the proximal operator is single-valued with full domain. It is also known that

$$\frac{z-\operatorname{prox}_{\beta g}(z)}{\beta}\in\partial g\big(\operatorname{prox}_{\beta g}(z)\big),\quad \forall\, z\in H,\ \beta>0. \tag{13}$$

Proposition 7 [ 9 ]

Let $H$ be a real Hilbert space and $h:H\to\mathbb{R}\cup\{+\infty\}$ be a proper, lower semicontinuous and convex function. Then, for $x\in\operatorname{dom}(h)$ and $y\in H$,

$$h'(x;\,y-x)+h(x)\le h(y),$$

where $h'(x;\,d)$ denotes the directional derivative of $h$ at $x$ in the direction $d$.

Lemma 8 [ 41 ]

Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$ and $T:C\to C$ be a nonexpansive mapping. Then $I-T$ is demiclosed at $0$ (i.e., if $\{x_n\}$ converges weakly to $x\in C$ and $\{x_n-Tx_n\}$ converges strongly to $0$, then $x=Tx$).

Lemma 9 [ 3 ]

Let $H$ be a real Hilbert space and $f_j:H\to(-\infty,\infty]$, $j=1,2,\dots,m$, be proper, convex and lower semicontinuous functions. Let $T:H\to H$ be a nonexpansive mapping. Then, for $0<\lambda\le\mu$, we have

$$F\Big(T\prod_{j=1}^{m}J_{\mu}^{(j)}\Big)=F(T)\cap\Big(\bigcap_{j=1}^{m}F\big(J_{\lambda}^{(j)}\big)\Big).$$

Lemma 10 [ 8 , 26 ]

Let $\{a_n\}$ be a sequence of non-negative real numbers, $\{\gamma_n\}$ be a sequence of real numbers in $(0,1)$ with $\sum_{n=1}^{\infty}\gamma_n=\infty$, and $\{d_n\}$ be a sequence of real numbers. Assume that

$$a_{n+1}\le(1-\gamma_n)a_n+\gamma_nd_n,\quad n\ge1.$$

If $\limsup_{k\to\infty}d_{n_k}\le0$ for every subsequence $\{a_{n_k}\}$ of $\{a_n\}$ satisfying the condition
$\limsup_{k\to\infty}(a_{n_k}-a_{n_k+1})\le0$, then $\lim_{n\to\infty}a_n=0$.

3 Main results

Throughout this section, we assume the following:

  1. $H_1$ and $H_2$ are real Hilbert spaces, and $A:H_1\to H_2$ is a bounded linear operator with adjoint $A^*$. The functions $f,g:H_1\to\mathbb{R}\cup\{+\infty\}$ are proper, lower semicontinuous and convex with $\operatorname{dom}g\subset\operatorname{dom}f$; $f$ is Fréchet differentiable on an open set containing $\operatorname{dom}g$; and the gradient $\nabla f$ is uniformly continuous on any bounded subset of $\operatorname{dom}g$ and maps any bounded subset of $\operatorname{dom}g$ to a bounded subset of $H_1$.

  2. For each $j=1,2,\dots,m$, $h_j:H_2\to\mathbb{R}\cup\{+\infty\}$ is a proper, lower semicontinuous and convex function, and $S:H_2\to H_2$ is a nonexpansive mapping. We define
$$\Gamma:=\Big\{z\in H_1:\ z\ \text{solves}\ \min_{x\in H_1}\{f(x)+g(x)\},\ \ Az\in\operatorname{Fix}(S)\ \text{and}\ Az\in\bigcap_{j=1}^{m}\operatorname*{argmin}_{y\in H_2}h_j(y)\Big\},$$
and assume that $\Gamma\neq\emptyset$. In the sequel, we write $T_n:=S\prod_{j=1}^{m}\operatorname{prox}_{\lambda_nh_j}$ for brevity.


Algorithm 11


Initialization: Choose $\sigma>0$, $\mu\in(0,1)$, $\delta\in(0,\tfrac12)$, $0<\lambda\le\lambda_n$, and let $u,x_0,x_1\in H_1$ be chosen arbitrarily.
Iterative steps: Calculate $x_{n+1}$ as follows:
Step 1: Given the iterates $x_{n-1}$ and $x_n$ for each $n\ge1$, choose $\theta_n$ such that $0\le\theta_n\le\bar\theta_n$, where

$$\bar\theta_n=\begin{cases}\min\Big\{\theta,\ \dfrac{\epsilon_n}{\|x_n-x_{n-1}\|}\Big\}, & \text{if } x_n\neq x_{n-1},\\[2mm] \theta, & \text{otherwise},\end{cases}$$
and set
$$u_n=x_n+\theta_n(x_n-x_{n-1}).$$

Step 2: Compute the stepsize

$$\gamma_n=\begin{cases}\dfrac{\big\|\big(S\prod_{j=1}^{m}\operatorname{prox}_{\lambda_nh_j}-I\big)Au_n\big\|^2}{2\big\|A^*\big(S\prod_{j=1}^{m}\operatorname{prox}_{\lambda_nh_j}-I\big)Au_n\big\|^2}, & \text{if } \big(S\prod_{j=1}^{m}\operatorname{prox}_{\lambda_nh_j}-I\big)Au_n\neq0,\\[2mm] \gamma>0, & \text{otherwise},\end{cases}$$

and compute

$$y_n=u_n+\gamma_nA^*\Big(S\prod_{j=1}^{m}\operatorname{prox}_{\lambda_nh_j}-I\Big)Au_n.$$

Step 3: Compute

$$w_n=\operatorname{prox}_{\lambda_ng}\big(y_n-\lambda_n\nabla f(y_n)\big),$$

where $\lambda_n=\sigma\mu^{m_n}$ and $m_n$ is the smallest nonnegative integer such that

$$\lambda_n\|\nabla f(w_n)-\nabla f(y_n)\|\le\delta\|w_n-y_n\|.$$

Step 4: Construct $x_{n+1}$ by

$$x_{n+1}=\beta_nu+(1-\beta_n)w_n.$$

Set $n:=n+1$ and return to Step 1.
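To make the steps concrete, here is a minimal Python sketch of Algorithm 11 (our own illustrative rendering, not the authors' code; all helper names, defaults and the sample sequences are assumptions). It takes a matrix representation of $A$ (so that $A^*$ is $A^T$), callables for $S$, $\nabla f$, $\operatorname{prox}_{\lambda g}$ and $\operatorname{prox}_{\lambda h_j}$, and sample sequences $\beta_n$, $\epsilon_n$ consistent with Remark 12 below. Note that Step 2 involves $\lambda_n$, which is only fixed by the linesearch in Step 3; the sketch uses the trial value $\sigma$ there.

import numpy as np

def algorithm11(x0, x1, u, A, S, prox_hs, grad_f, prox_g,
                sigma=1.0, mu=0.5, delta=0.25, theta=0.9,
                gamma_default=1.0, n_iters=200):
    # A is a matrix, so A* is A.T; S maps H2 to H2; prox_hs is a list of
    # callables (z, lam) -> prox_{lam*h_j}(z); prox_g likewise for g.
    # beta_n = 1/(n+2) and eps_n = 1/(n+1)**2 are sample sequences with
    # eps_n = o(beta_n), as required by Remark 12.
    x_prev, x = x0, x1
    for n in range(1, n_iters + 1):
        beta_n = 1.0 / (n + 2)
        eps_n = 1.0 / (n + 1) ** 2

        # Step 1: inertial extrapolation with theta_n <= bar(theta)_n
        diff = np.linalg.norm(x - x_prev)
        theta_n = theta if diff == 0 else min(theta, eps_n / diff)
        u_n = x + theta_n * (x - x_prev)

        # Step 2: self-adaptive stepsize; the composition S o prox_{h_m} o ... o prox_{h_1}
        # is evaluated with the trial value sigma, since lam_n is fixed only in Step 3.
        def T(z, lam):
            for prox_h in prox_hs:
                z = prox_h(z, lam)
            return S(z)
        t = T(A @ u_n, sigma) - A @ u_n            # (S Prod prox - I) A u_n
        At = A.T @ t                               # A* applied to it
        gamma_n = (gamma_default if np.linalg.norm(t) == 0
                   else np.linalg.norm(t) ** 2 / (2.0 * np.linalg.norm(At) ** 2))
        y_n = u_n + gamma_n * At

        # Step 3: backtracking linesearch lam_n = sigma * mu**m_n
        m = 0
        while True:
            lam_n = sigma * mu ** m
            w_n = prox_g(y_n - lam_n * grad_f(y_n), lam_n)
            if (lam_n * np.linalg.norm(grad_f(w_n) - grad_f(y_n))
                    <= delta * np.linalg.norm(w_n - y_n)):
                break
            m += 1

        # Step 4: Halpern step toward the anchor u
        x_prev, x = x, beta_n * u + (1.0 - beta_n) * w_n
    return x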

Remark 12

We assume that $\{\epsilon_n\}$ is a positive sequence such that $\epsilon_n=o(\beta_n)$, which means that $\lim_{n\to\infty}\frac{\epsilon_n}{\beta_n}=0$, and that $\{\beta_n\}\subset(0,1)$ satisfies the following conditions:

$$\lim_{n\to\infty}\beta_n=0,\quad \sum_{n=1}^{\infty}\beta_n=\infty. \tag{14}$$

From (14), it is easy to see that

$$\lim_{n\to\infty}\frac{\theta_n}{\beta_n}\|x_n-x_{n-1}\|=0. \tag{15}$$

Indeed, $\theta_n\|x_n-x_{n-1}\|\le\epsilon_n$ for each $n\ge1$, which together with $\lim_{n\to\infty}\frac{\epsilon_n}{\beta_n}=0$ implies that

$$\lim_{n\to\infty}\frac{\theta_n}{\beta_n}\|x_n-x_{n-1}\|\le\lim_{n\to\infty}\frac{\epsilon_n}{\beta_n}=0.$$

Theorem 13

Let $\{x_n\}$ be a sequence generated by Algorithm 11. Then $\{x_n\}$ is bounded.

Proof.
Let $z\in\Gamma$; then $T_nAz=Az$. From Algorithm 11 we obtain
$$\begin{aligned}\|y_n-z\|^2&=\|u_n+\gamma_nA^*(T_n-I)Au_n-z\|^2\\ &=\|u_n-z\|^2+\gamma_n^2\|A^*(T_n-I)Au_n\|^2+2\gamma_n\langle u_n-z,\,A^*(T_n-I)Au_n\rangle,\end{aligned}$$

where

$$\begin{aligned}\langle u_n-z,\,A^*(T_n-I)Au_n\rangle&=\langle Au_n-Az,\,(T_n-I)Au_n\rangle\\ &=\langle T_nAu_n-Az-(T_n-I)Au_n,\,(T_n-I)Au_n\rangle\\ &=\langle T_nAu_n-Az,\,(T_n-I)Au_n\rangle-\|(T_n-I)Au_n\|^2\\ &=\tfrac12\big(\|T_nAu_n-Az\|^2+\|(T_n-I)Au_n\|^2-\|Au_n-Az\|^2\big)-\|(T_n-I)Au_n\|^2\\ &\le\tfrac12\big(\|Au_n-Az\|^2+\|(T_n-I)Au_n\|^2-\|Au_n-Az\|^2\big)-\|(T_n-I)Au_n\|^2\\ &=-\tfrac12\|(T_n-I)Au_n\|^2,\end{aligned}$$
where we used Lemma 4 and the nonexpansivity of $T_n$ (so that $\|T_nAu_n-Az\|\le\|Au_n-Az\|$).

On combining 21 and 22, we obtain that

$$\begin{aligned}\|y_n-z\|^2&\le\|u_n-z\|^2+\gamma_n^2\|A^*(T_n-I)Au_n\|^2-\gamma_n\|(T_n-I)Au_n\|^2\\ &=\|u_n-z\|^2-\gamma_n\Big[\|(T_n-I)Au_n\|^2-\gamma_n\|A^*(T_n-I)Au_n\|^2\Big]\\ &=\|u_n-z\|^2-\tfrac14\,\frac{\|(T_n-I)Au_n\|^4}{\|A^*(T_n-I)Au_n\|^2},\end{aligned}$$
by the definition of $\gamma_n$ in Step 2.

Since the last term above is nonnegative, we have

$$\|y_n-z\|^2\le\|u_n-z\|^2.$$

Hence, from Algorithm 11, we have

$$\|y_n-z\|\le\|u_n-z\|=\|x_n+\theta_n(x_n-x_{n-1})-z\|\le\|x_n-z\|+\theta_n\|x_n-x_{n-1}\|=\|x_n-z\|+\beta_n\cdot\frac{\theta_n}{\beta_n}\|x_n-x_{n-1}\|.$$

Since $\frac{\theta_n}{\beta_n}\|x_n-x_{n-1}\|\to0$ by (15), there exists a constant $M_1>0$ such that

$$\frac{\theta_n}{\beta_n}\|x_n-x_{n-1}\|\le M_1,\quad \forall\, n\ge1,$$

and so

$$\|y_n-z\|\le\|x_n-z\|+\beta_nM_1.$$

By applying (13) and Algorithm 11, we observe that

$$\frac{y_n-w_n}{\lambda_n}-\nabla f(y_n)=\frac{y_n-\operatorname{prox}_{\lambda_ng}\big(y_n-\lambda_n\nabla f(y_n)\big)}{\lambda_n}-\nabla f(y_n)\in\partial g(w_n).$$

From the convexity of $g$, we obtain

$$g(x)-g(w_n)\ge\Big\langle\frac{y_n-w_n}{\lambda_n}-\nabla f(y_n),\,x-w_n\Big\rangle,\quad \forall\, x\in\operatorname{dom}(g).$$

Also, the convexity of $f$ implies

$$f(x)-f(y)\ge\langle\nabla f(y),\,x-y\rangle,\quad \forall\, x\in\operatorname{dom}(f),\ y\in\operatorname{dom}(g).$$

Combining the last two inequalities, with any $x\in\operatorname{dom}(g)\subset\operatorname{dom}(f)$ and $y=y_n\in\operatorname{dom}(g)$, we obtain

$$\begin{aligned}g(x)-g(w_n)+f(x)-f(y_n)&\ge\Big\langle\frac{y_n-w_n}{\lambda_n}-\nabla f(y_n),\,x-w_n\Big\rangle+\langle\nabla f(y_n),\,x-y_n\rangle\\ &=\frac{1}{\lambda_n}\langle y_n-w_n,\,x-w_n\rangle+\langle\nabla f(y_n)-\nabla f(w_n),\,w_n-y_n\rangle+\langle\nabla f(w_n),\,w_n-y_n\rangle\\ &\ge\frac{1}{\lambda_n}\langle y_n-w_n,\,x-w_n\rangle-\|\nabla f(y_n)-\nabla f(w_n)\|\,\|w_n-y_n\|+\langle\nabla f(w_n),\,w_n-y_n\rangle\\ &\ge\frac{1}{\lambda_n}\langle y_n-w_n,\,x-w_n\rangle-\frac{\delta}{\lambda_n}\|y_n-w_n\|^2+\langle\nabla f(w_n),\,w_n-y_n\rangle,\end{aligned}$$

where the last inequality follows from Step 3 of Algorithm 11. It then follows that

$$\langle y_n-w_n,\,w_n-x\rangle\ge\lambda_n\big[f(y_n)+g(w_n)-(f+g)(x)+\langle\nabla f(w_n),\,w_n-y_n\rangle\big]-\delta\|y_n-w_n\|^2.$$

Applying the convexity inequality for $f$ with $x=y_n$ and $y=w_n$, we obtain

$$f(y_n)-f(w_n)\ge\langle\nabla f(w_n),\,y_n-w_n\rangle.$$

This, together with the preceding estimate, yields

$$\begin{aligned}\langle y_n-w_n,\,w_n-x\rangle&\ge\lambda_n\big[f(y_n)+g(w_n)-(f+g)(x)+f(w_n)-f(y_n)\big]-\delta\|y_n-w_n\|^2\\ &=\lambda_n\big[(f+g)(w_n)-(f+g)(x)\big]-\delta\|y_n-w_n\|^2.\end{aligned}$$

Using the identity

$$2\langle y_n-w_n,\,w_n-x\rangle=\|y_n-x\|^2-\|y_n-w_n\|^2-\|w_n-x\|^2,$$

we obtain

$$\|w_n-x\|^2\le\|y_n-x\|^2-2\lambda_n\big[(f+g)(w_n)-(f+g)(x)\big]-(1-2\delta)\|y_n-w_n\|^2.$$

Since $z\in\Gamma$, we have $(f+g)(w_n)-(f+g)(z)\ge0$; hence, taking $x=z$ above, we obtain

$$\|w_n-z\|^2\le\|y_n-z\|^2-(1-2\delta)\|y_n-w_n\|^2.$$

From Algorithm 11 and the bounds on $\|y_n-z\|$ and $\|w_n-z\|$ above, we get

$$\begin{aligned}\|x_{n+1}-z\|&=\|\beta_nu+(1-\beta_n)w_n-z\|\\ &\le\beta_n\|u-z\|+(1-\beta_n)\|w_n-z\|\\ &\le\beta_n\|u-z\|+(1-\beta_n)\big[\|x_n-z\|+\beta_nM_1\big]\\ &\le(1-\beta_n)\|x_n-z\|+\beta_n\big[\|u-z\|+M_1\big]\\ &\le\max\{\|x_n-z\|,\ \|u-z\|+M_1\}\\ &\ \ \vdots\\ &\le\max\{\|x_1-z\|,\ \|u-z\|+M_1\}.\end{aligned}$$

Hence, the sequence $\{x_n\}$ is bounded. Consequently, it follows from the above estimates that the sequences $\{u_n\}$, $\{y_n\}$ and $\{w_n\}$ are also bounded.

$\square$

Lemma 14

Let $\{y_n\}$ be defined as in Algorithm 11. Then

$$\|y_n-z\|^2\le\|u_n-z\|^2-\|y_n-u_n\|^2+2\gamma_n\|y_n-u_n\|\,\|A^*(T_n-I)Au_n\|.$$

Proof.
Using Lemma 4 and $y_n-u_n=\gamma_nA^*(T_n-I)Au_n$, we have
$$\begin{aligned}\|y_n-z\|^2&=\langle u_n+\gamma_nA^*(T_n-I)Au_n-z,\,y_n-z\rangle\\ &=\langle u_n-z,\,y_n-z\rangle+\langle y_n-u_n,\,y_n-z\rangle\\ &=\tfrac12\big[\|u_n-z\|^2+\|y_n-z\|^2-\|y_n-u_n\|^2\big]+\|y_n-u_n\|^2+\langle y_n-u_n,\,u_n-z\rangle\\ &\le\tfrac12\big[\|y_n-z\|^2+\|u_n-z\|^2-\|y_n-u_n\|^2+2\gamma_n\|y_n-u_n\|\,\|A^*(T_n-I)Au_n\|\big],\end{aligned}$$
where the last inequality uses $\|y_n-u_n\|=\gamma_n\|A^*(T_n-I)Au_n\|$ together with $\langle y_n-u_n,\,u_n-z\rangle=\gamma_n\langle A^*(T_n-I)Au_n,\,u_n-z\rangle\le0$, which follows from the estimate in the proof of Theorem 13.

Thus, we conclude that

$$\|y_n-z\|^2\le\|u_n-z\|^2-\|y_n-u_n\|^2+2\gamma_n\|y_n-u_n\|\,\|A^*(T_n-I)Au_n\|.$$

$\square$

Theorem 15

Assume that conditions (1)–(2) hold. Then the sequence $\{x_n\}$ generated by Algorithm 11 converges strongly to $x^*=P_{\Gamma}(u)$, where $P_{\Gamma}$ denotes the metric projection of $H_1$ onto the solution set $\Gamma$.

Proof.
Let $z\in\Gamma$. From Algorithm 11 we have
$$\begin{aligned}\|u_n-z\|^2&=\|x_n+\theta_n(x_n-x_{n-1})-z\|^2\\ &=\|x_n-z\|^2+2\theta_n\langle x_n-z,\,x_n-x_{n-1}\rangle+\theta_n^2\|x_n-x_{n-1}\|^2\\ &\le\|x_n-z\|^2+2\theta_n\|x_n-z\|\,\|x_n-x_{n-1}\|+\theta_n^2\|x_n-x_{n-1}\|^2\\ &=\|x_n-z\|^2+\theta_n\|x_n-x_{n-1}\|\big[2\|x_n-z\|+\theta_n\|x_n-x_{n-1}\|\big]\\ &\le\|x_n-z\|^2+\theta_n\|x_n-x_{n-1}\|M_2,\end{aligned}$$

for some $M_2>0$ with $M_2\ge2\|x_n-z\|+\theta_n\|x_n-x_{n-1}\|$ (such an $M_2$ exists since $\{x_n\}$ is bounded). Combining this with Lemma 14 and the estimates obtained in the proof of Theorem 13, we get

$$\begin{aligned}\|w_n-z\|^2\le{}&\|x_n-z\|^2+\theta_n\|x_n-x_{n-1}\|M_2-\|y_n-u_n\|^2+2\gamma_n\|y_n-u_n\|\,\|A^*(T_n-I)Au_n\|\\ &-(1-2\delta)\|y_n-w_n\|^2-\tfrac14\,\frac{\|(T_n-I)Au_n\|^4}{\|A^*(T_n-I)Au_n\|^2}.\end{aligned}$$

From Algorithm 11 and the last inequality, we conclude that

$$\begin{aligned}\|x_{n+1}-z\|^2\le{}&(1-\beta_n)\|x_n-z\|^2+(1-\beta_n)\theta_n\|x_n-x_{n-1}\|M_2-\tfrac14(1-\beta_n)\,\frac{\|(T_n-I)Au_n\|^4}{\|A^*(T_n-I)Au_n\|^2}\\ &-(1-\beta_n)\|y_n-u_n\|^2+2\gamma_n(1-\beta_n)\|y_n-u_n\|\,\|A^*(T_n-I)Au_n\|\\ &-(1-\beta_n)(1-2\delta)\|y_n-w_n\|^2+2\beta_n\langle u-z,\,x_{n+1}-z\rangle\\ \le{}&(1-\beta_n)\|x_n-z\|^2+\beta_n\Big[\frac{\theta_n}{\beta_n}\|x_n-x_{n-1}\|(1-\beta_n)M_2+2\langle u-z,\,x_{n+1}-z\rangle\Big].\end{aligned}$$

Set $d_n:=\frac{\theta_n}{\beta_n}\|x_n-x_{n-1}\|(1-\beta_n)M_2+2\langle u-z,\,x_{n+1}-z\rangle$. In view of Lemma 10, it suffices to show that $\limsup_{k\to\infty}d_{n_k}\le0$ for every subsequence $\{\|x_{n_k}-z\|\}$ of $\{\|x_n-z\|\}$ satisfying the condition

$$\limsup_{k\to\infty}\big(\|x_{n_k}-z\|-\|x_{n_k+1}-z\|\big)\le0.$$

To this end, suppose that $\{\|x_{n_k}-z\|\}$ is a subsequence of $\{\|x_n-z\|\}$ for which this condition holds. Then, since $\{x_n\}$ is bounded,

$$\limsup_{k\to\infty}\big(\|x_{n_k}-z\|^2-\|x_{n_k+1}-z\|^2\big)=\limsup_{k\to\infty}\Big(\big(\|x_{n_k}-z\|-\|x_{n_k+1}-z\|\big)\big(\|x_{n_k}-z\|+\|x_{n_k+1}-z\|\big)\Big)\le0.
$$

From the full estimate for $\|x_{n+1}-z\|^2$ above and the last relation, we get

$$\limsup_{k\to\infty}\Big(\tfrac14(1-\beta_{n_k})\,\frac{\|(T_{n_k}-I)Au_{n_k}\|^4}{\|A^*(T_{n_k}-I)Au_{n_k}\|^2}\Big)\le\limsup_{k\to\infty}\Big(\|x_{n_k}-z\|^2-\|x_{n_k+1}-z\|^2+\beta_{n_k}\Big[\frac{\theta_{n_k}}{\beta_{n_k}}\|x_{n_k}-x_{n_k-1}\|(1-\beta_{n_k})M_2+2\langle u-z,\,x_{n_k+1}-z\rangle\Big]\Big)\le0,$$

since $\beta_{n_k}\to0$ and all the sequences involved are bounded.

Thus,

$$\lim_{k\to\infty}\frac{\|(T_{n_k}-I)Au_{n_k}\|^4}{\|A^*(T_{n_k}-I)Au_{n_k}\|^2}=0,$$

which implies that

$$\frac{1}{\|A\|}\|(T_{n_k}-I)Au_{n_k}\|\le\frac{\|(T_{n_k}-I)Au_{n_k}\|^2}{\|A^*(T_{n_k}-I)Au_{n_k}\|}=\Big(\frac{\|(T_{n_k}-I)Au_{n_k}\|^4}{\|A^*(T_{n_k}-I)Au_{n_k}\|^2}\Big)^{1/2}\to0.$$

Hence,

$$\lim_{k\to\infty}\|(T_{n_k}-I)Au_{n_k}\|=0.$$

Also, since $\|y_{n_k}-u_{n_k}\|=\gamma_{n_k}\|A^*(T_{n_k}-I)Au_{n_k}\|=\frac{\|(T_{n_k}-I)Au_{n_k}\|^2}{2\|A^*(T_{n_k}-I)Au_{n_k}\|}$, the same argument gives

$$\lim_{k\to\infty}\|y_{n_k}-u_{n_k}\|=0.$$

In the same way, we get

$$\limsup_{k\to\infty}\big((1-\beta_{n_k})(1-2\delta)\|y_{n_k}-w_{n_k}\|^2\big)\le\limsup_{k\to\infty}\big(\|x_{n_k}-z\|^2-\|x_{n_k+1}-z\|^2\big)\le0.$$

Thus, since $\delta\in(0,\tfrac12)$ and $\beta_{n_k}\to0$, we obtain

$$\lim_{k\to\infty}\|y_{n_k}-w_{n_k}\|=0.$$

From Algorithm 11, we have

$$\|u_{n_k}-x_{n_k}\|=\theta_{n_k}\|x_{n_k}-x_{n_k-1}\|=\beta_{n_k}\cdot\frac{\theta_{n_k}}{\beta_{n_k}}\|x_{n_k}-x_{n_k-1}\|\to0,\quad\text{as } k\to\infty.$$

Combining the last three limits, we obtain

$$\lim_{k\to\infty}\|u_{n_k}-w_{n_k}\|=0=\lim_{k\to\infty}\|y_{n_k}-x_{n_k}\|.$$

Consequently,

$$\lim_{k\to\infty}\|w_{n_k}-x_{n_k}\|=0.$$

Using Algorithm 11 and $\beta_{n_k}\to0$, we obtain

$$\lim_{k\to\infty}\|x_{n_k+1}-w_{n_k}\|=\lim_{k\to\infty}\beta_{n_k}\|u-w_{n_k}\|=0.$$

It follows that

$$\|x_{n_k+1}-x_{n_k}\|\le\beta_{n_k}\|u-x_{n_k}\|+(1-\beta_{n_k})\|w_{n_k}-x_{n_k}\|\to0,\quad k\to\infty.$$

Since $\{x_{n_k}\}$ is bounded, there exists a subsequence $\{x_{n_{k_j}}\}$ of $\{x_{n_k}\}$ which converges weakly to some $\hat{x}\in H_1$. By the limits above, the subsequences $\{u_{n_{k_j}}\}$, $\{y_{n_{k_j}}\}$ and $\{w_{n_{k_j}}\}$ also converge weakly to $\hat{x}$. Using $\lim_{k\to\infty}\|(T_{n_k}-I)Au_{n_k}\|=0$, Lemma 8, Lemma 9 and the fact that $A$ is a bounded linear operator, we get $A\hat{x}\in F\big(S\prod_{j=1}^{m}\operatorname{prox}_{\lambda h_j}\big)$, which implies that $A\hat{x}\in F(S)\cap\big(\bigcap_{j=1}^{m}F(\operatorname{prox}_{\lambda h_j})\big)$. It remains to show that $\hat{x}\in\Upsilon$.

From Step 3 of Algorithm 11, $0<\lambda\le\lambda_n$ and $\lim_{k\to\infty}\|y_{n_k}-w_{n_k}\|=0$, we get

$$\lim_{j\to\infty}\|\nabla f(w_{n_{k_j}})-\nabla f(y_{n_{k_j}})\|=0.$$

Since $w_{n_{k_j}}=\operatorname{prox}_{\lambda_{n_{k_j}}g}\big(y_{n_{k_j}}-\lambda_{n_{k_j}}\nabla f(y_{n_{k_j}})\big)$, it follows from (13) that

$$\frac{y_{n_{k_j}}-w_{n_{k_j}}}{\lambda_{n_{k_j}}}-\nabla f(y_{n_{k_j}})\in\partial g(w_{n_{k_j}}),$$

which implies that

$$\frac{y_{n_{k_j}}-w_{n_{k_j}}}{\lambda_{n_{k_j}}}+\nabla f(w_{n_{k_j}})-\nabla f(y_{n_{k_j}})\in\nabla f(w_{n_{k_j}})+\partial g(w_{n_{k_j}})\subseteq\partial(f+g)(w_{n_{k_j}}).$$

Letting $j\to\infty$ and applying Lemma 6 (the left-hand side converges strongly to $0$, while $w_{n_{k_j}}\rightharpoonup\hat{x}$), we obtain $0\in\partial(f+g)(\hat{x})$, that is, $\hat{x}\in\Upsilon$. Hence, we conclude that $\hat{x}\in\Gamma$. Next, we show that

$$\limsup_{k\to\infty}\langle u-z,\,x_{n_k+1}-z\rangle\le0.$$

Take $z=P_{\Gamma}(u)$. Since $x_{n_{k_j}}\rightharpoonup\hat{x}\in\Gamma$ and $\|x_{n_k+1}-x_{n_k}\|\to0$, the characterization of the metric projection yields

$$\limsup_{k\to\infty}\langle u-z,\,x_{n_k+1}-z\rangle=\lim_{j\to\infty}\langle u-z,\,x_{n_{k_j}}-z\rangle=\langle u-z,\,\hat{x}-z\rangle\le0.$$

Hence, combining this with (15), we obtain $\limsup_{k\to\infty}d_{n_k}\le0$. Applying Lemma 10 to the recursion above, we conclude that $\{x_n\}$ converges strongly to $z=P_{\Gamma}(u)$.

$\square$

4 Numerical example

In this section, we give a numerical example in an $m$-dimensional real space to support our main result.

Example 16

Let $H_1=H_2=\mathbb{R}^m$ with the Euclidean norm. Define $f,g:H_1\to\mathbb{R}\cup\{+\infty\}$ by

$$f(x)=\tfrac12\|Dx-b\|^2,\qquad g(x)=\tfrac12\|Bx-c\|^2,$$

where $D,B\in\mathbb{R}^{m\times m}$ and $b,c\in\mathbb{R}^m$ (we denote the matrix defining $f$ by $D$ to avoid confusion with the operator $A$ below). It is easy to see that $f$ and $g$ are proper, lower semicontinuous and convex. Also, we know by [ 21 ] that

$$\operatorname{prox}_{\lambda f}(x)=\operatorname*{argmin}_{y\in H_1}\Big[f(y)+\tfrac{1}{2\lambda}\|y-x\|^2\Big]=(I+\lambda D^TD)^{-1}(x+\lambda D^Tb).$$

Now, let $A:H_1\to H_2$ be defined by

$$A(x)=\frac{x}{1.5},\qquad x\in\mathbb{R}^m,$$

so that

$$A^T(y)=\frac{y}{1.5},\qquad y\in\mathbb{R}^m.$$

For each $j=1,2$ and $x\in H_2$, let $h_j:H_2\to\mathbb{R}\cup\{+\infty\}$ be given by

$$h_j(x)=\tfrac12\|P_jx-q_j\|^2,$$

where $P_1,P_2\in\mathbb{R}^{m\times m}$ and $q_1,q_2\in\mathbb{R}^m$. As before,

$$\operatorname{prox}_{\lambda h_j}(x)=(I+\lambda P_j^TP_j)^{-1}(x+\lambda P_j^Tq_j).$$

Let the mapping $S:H_2\to H_2$ be defined by $S(x)=\frac{x}{2}$. For this example, choose $\beta_n=\frac{1}{2n+3}$, $u=\frac12$, $\delta=\frac14$, $\mu=\frac18$, $\sigma=\frac{3}{2000}$ and $\epsilon_n=\frac{1}{n^{1.2}}$. We choose the initial points $x_0,x_1\in\mathbb{R}^m$ randomly in $(0,1)$. Using $\|x_{n+1}-x_n\|^2\le10^{-4}$ as the stopping criterion, we run this example for various values of $m$ (a code sketch of this setup is given after the list of cases below):

Case I: m=10;
Case II: m=15;
Case III: m=20;
Case IV: m=50.
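The following Python sketch (illustrative only; it reuses the hypothetical algorithm11 helper given after Algorithm 11 and random data in place of the unspecified matrices, and it keeps that sketch's internal $\beta_n$ rather than $\frac{1}{2n+3}$) shows how the data of this example can be assembled for $m=10$.

import numpy as np

# Data of Example 16 for m = 10 (random instances; assumptions, not the
# paper's data).
m = 10
rng = np.random.default_rng(1)
D, b = rng.standard_normal((m, m)), rng.standard_normal(m)   # f(x) = 0.5||Dx-b||^2
Bm, c = rng.standard_normal((m, m)), rng.standard_normal(m)  # g(x) = 0.5||Bm x-c||^2
Ps = [rng.standard_normal((m, m)) for _ in range(2)]         # h_j(x) = 0.5||P_j x-q_j||^2
qs = [rng.standard_normal(m) for _ in range(2)]

def prox_quad(M, r):
    # (z, lam) -> prox_{lam*h}(z) for h(x) = 0.5*||Mx - r||^2
    return lambda z, lam: np.linalg.solve(np.eye(m) + lam * (M.T @ M),
                                          z + lam * (M.T @ r))

grad_f = lambda x: D.T @ (D @ x - b)
prox_g = prox_quad(Bm, c)
prox_hs = [prox_quad(P, q) for P, q in zip(Ps, qs)]
A = np.eye(m) / 1.5          # A(x) = x/1.5, so A* = A.T = A
S = lambda z: z / 2.0        # the nonexpansive mapping S(x) = x/2

x_out = algorithm11(rng.random(m), rng.random(m), u=0.5 * np.ones(m),
                    A=A, S=S, prox_hs=prox_hs, grad_f=grad_f, prox_g=prox_g,
                    sigma=3/2000, mu=1/8, delta=1/4)
print(np.round(x_out, 4))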

The results of this experiment are reported in figure 1.

Figure 1. Convergence behavior of the proposed method. Top left: Case I; top right: Case II; bottom left: Case III; bottom right: Case IV.

H.A. Abass, K.O. Aremu, L.O. Jolaoso and O.T. Mewomo, An inertial forward-backward splitting method for approximating solutions of certain optimization problem, J. Nonlinear Funct. Anal., 2020 (2020), Article ID 6, https://doi.org/10.23952/jnfa.2020.6.

H.A. Abass, C. Izuchukwu, O.T. Mewomo and Q.L. Dong, Strong convergence of an inertial forward-backward splitting method for accretive operators in real Banach spaces, Fixed Point Theory, 21 (2020) no. 2, pp. 397–412, https://doi.org/10.24193/fpt-ro.2020.2.28.

H.A. Abass, C. Izuchukwu, O.T. Mewomo and F.U. Ogbuisi, An iterative method for solutions of finite families of split minimization problems and fixed point problems, Novi Sad Journal of Mathematics, 49 (2019) no. 1, pp. 117–136, https://doi.org/10.30755/NSJOM.07925.

H.A. Abass and L.O. Jolaoso, An inertial generalized viscosity approximation method for solving multiple-sets split feasibility problem and common fixed point of strictly pseudo-nonspreading mappings, Axioms, 10 (2021) no. 1, art. no. 1, https://doi.org/10.3390/axioms10010001.

M. Abbas, M. Alshahrani, Q.H. Ansari, O.S. Iyiola and Y. Shehu, Iterative methods for solving proximal split minimization problems, Numer. Algor., 78 (2018) no. 1, pp. 193–215, https://doi.org/10.1007/s11075-017-0372-3.

F. Alvarez and H. Attouch, An Inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set-Valued Anal., 9 (2001), pp. 3–11, https://doi.org/10.1023/A:1011253113155.

Q.H. Ansari and A. Rehan, Split feasibility and fixed point problems. In Nonlinear Analysis: Approximation Theory, Optimization and Application, ed. Q.H. Ansari, pp. 281–322. New York, Springer, 2014, https://doi.org/10.1007/978-81-322-1883-8_9

K. Aoyama, Y. Kimura, W. Takahashi and M. Toyoda, Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space, Nonlinear Anal., 67 (2007), pp. 2350–2360, https://doi.org/10.1016/j.na.2006.08.032.

H.H. Bauschke and P.L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, 408, New York, Springer, 2011, https://doi.org/10.1007/978-3-319-48311-5.

J.Y. Bello Cruz and T.T. Nghia, On the convergence of the forward-backward splitting method with linesearches, Optim. Methods Softw., 31 (2016) no. 6, pp. 1209–1238, https://doi.org/10.1080/10556788.2016.1214959.

C. Byrne, Iterative oblique projection onto convex subsets and the split feasibility problem, Inverse Problems, 18 (2002) no. 2, pp. 441–453, https://iopscience.iop.org/article/10.1088/0266-5611/18/2/310/meta.

C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Problems, 20 (2004), pp. 103–120, https://iopscience.iop.org/article/10.1088/0266-5611/20/1/006/meta.

C. Byrne, Y. Censor, A. Gibali and S. Reich, The split common null point problem, J. Nonlinear Convex Anal., 13 (2012), pp. 759–775, https://doi.org/10.48550/arXiv.1108.5953.

R.S. Burachik and A.N. Iusem, Set-Valued Mappings and Enlargements of Monotone Operators, 8, Boston, Springer Science & Business Media, 2007.

Y. Censor, T. Bortfield, B. Martin and A. Trofimov, A unified approach for inversion problems in intensity-modulated radiation therapy, Phys. Med. Biol., 51 (2006), pp. 2353–2365, https://doi.org/10.1088/0031-9155/51/10/001.

Y. Censor and T. Elfving, A multiprojection algorithm using Bregman projections in product space, Numer. Algor., 8 (1994), pp. 221–239, https://doi.org/10.1007/BF02142692

Y. Censor, T. Elfving, N. Kopf and T. Bortfeld, The multiple-sets split feasibility problem and its applications for inverse problems, Inverse Problems, 21 (2005), pp. 2071–2084, https://doi.org/10.1088/0266-5611/21/6/017.

S.S. Chang, J.C. Yao, L. Wang, M. Liu and L. Zhao, On the inertial forward-backward splitting technique for solving a system of inclusion problems in Hilbert spaces, Optimization, 70 (2021), pp. 1–15, https://doi.org/10.1080/02331934.2020.1786567.

C.E. Chidume, Geometric Properties of Banach Spaces and Nonlinear Iterations, Lecture Notes in Mathematics, Springer, ISBN 978-1-84882-189-7, 2009, https://doi.org/10.1007/978-1-84882-190-3.

W. Cholamjiak, P. Cholamjiak and S. Suantai, An inertial forward-backward splitting method for solving inclusion problems in Hilbert space, J. Fixed Point Theor. Appl., (2018), pp. 20–42, https://doi.org/10.1007/s11784-018-0526-5.

P.L. Combettes and J.C. Pesquet, Proximal splitting methods in signal processing, in H.H. Bauschke, R. Burachik, P.L. Combettes, V. Elser, D.R. Wolkowicz (eds), Fixed Point Algorithms for inverse problems in science and engineering, Springer Optimization and Its Applications, 49, pp. 185–212, Springer, New York, 2011, https://doi.org/10.1007/978-1-4419-9569-8_10.

K. Goebel and S. Reich, Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings, Marcel Dekker, New York, 1984, https://doi.org/10.1112/blms/17.3.293.

F.O. Isiogugu and M.O. Osilike, Convergence theorems for new classes of multivalued hemicontractive-type mappings, Fixed Point Theory and Applications, 93 (2014), https://doi.org/10.1186/1687-1812-2014-93.

L.O. Jolaoso, H.A. Abass and O.T. Mewomo, A Viscosity-Proximal Gradient Method with Inertial Extrapolation for Solving Minimization Problem in Hilbert Space, Arch. Math. (Brno), 55 (2019), pp. 167–194, https://dml.cz/handle/10338.dmlcz/147824.

C. Kanzow and Y. Shehu, Strong convergence of a double-type method for monotone variational inequalities in Hilbert spaces, J. Fixed Point Theory Appl., 20 (2018) no. 1, art. 51, pp. 1–24, https://doi.org/10.1007/s11784-018-0531-8.

Y. Kimura and S. Saejung, Strong convergence for a common fixed point of two different generalizations of cutter operators, Linear Nonlinear Anal., 1 (2015), pp. 53–65.

K. Kunrada and P. Cholamjiak, Strong convergence of the forward-backward splitting algorithms via linesearches in Hilbert spaces, Applicable Analysis, (2021), pp.  1–20, https://doi.org/10.1080/00036811.2021.1986021.

L.J. Lin and W. Takahashi, A general iterative method for hierarchical variational inequality problems in Hilbert spaces and applications, Positivity, 16 (2012) no. 3, pp. 429–453, https://doi.org/10.1007/s11117-012-0161-0.

D.A. Lorenz and T. Pock, An inertial forward-backward splitting algorithm for monotone inclusions, J. Math. Imaging Vis., 51 (2015), pp. 311–325, https://doi.org/10.1007/s10851-014-0523-2.

A. Moudafi, Split monotone variational inclusions, J. Optim. Theory Appl., 150 (2011), pp. 275–288, https://doi.org/10.1007/s10957-011-9814-6.

P.E. Mainge, Inertial iterative process for fixed points of certain quasi-nonexpansive mappings, Set-Valued Anal., 15 (2007), pp. 67–79, https://doi.org/10.1007/s11228-006-0027-3.

W. Phuengrattana and J. Tiammee, Proximal point algorithms for finding common fixed points of a finite family of quasi-nonexpansive multi-valued mappings in real Hilbert spaces, J. Fixed Point Theory Appl., 20 (2018), pp. 1–14, https://doi.org/10.1007/s11784-018-0590-x.

B.T. Polyak, Some methods of speeding up the convergence of iteration methods, U.S.S.R. Comput. Math. Math. Phys., 4 (1964) no. 5, pp. 1–17, https://doi.org/10.1016/0041-5553(64)90137-5.

Y. Shehu and O.S. Iyiola, On a modified extragradient method for variational inequality problem with application to industrial electricity production, J. Ind. Manag. Optim., 15 (2019) no. 1, pp. 319–342, https://doi.org/10.3934/jimo.2018045.

Y. Shehu and F.U. Ogbuisi, An iterative method for solving split monotone variational inclusion and fixed point problems, Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM, 110 (2016) no. 2, pp. 503–518, https://doi.org/10.1007/s13398-015-0245-3.

W. Takahashi, Nonlinear Functional Analysis, Yokohama Publishers, Yokohama, 2000.

W. Takahashi, Introduction to nonlinear and convex analysis, Yokohama Publisher, Yokohama, 2009.

D.V. Thong and D.V. Hieu, A new approximation method for finding common fixed points of families of demicontractive operators and applications, J. Fixed Point Theory Appl., 20 (2018), pp. 1–27, https://doi.org/10.1007/s11784-018-0551-4.

Y. Wang and F. Wang, Strong convergence of the forward-backward splitting method with multiple parameters in Hilbert spaces, Optimization, 67 (2018), no. 4, pp. 493–505, https://doi.org/10.1080/02331934.2017.1411485.

H.K. Xu, Averaged mappings and the gradient-projection algorithm, J. Optim. Theory Appl., 150 (2011), pp. 360–378, https://doi.org/10.1007/s10957-011-9837-z.

H.Y. Zhou, Convergence theorems of fixed points for strict pseudo-contractions in Hilbert spaces, Nonlinear Anal., 69 (2008) no. 2, pp. 456–462, https://doi.org/10.1016/j.na.2007.05.032.