Forward-backward splitting algorithm
with self-adaptive method for finite family
of split minimization and fixed point problems
in Hilbert spaces

Hammed A. Abass\(^1\), Kazeem O. Aremu\(^2\), Olawale K. Oyewole\(^3\), Akindele A. Mebawondu\(^4\), Ojen K. Narain\(^5\)

Received: July 23, 2023; accepted: October 23, 2023; published online: December 22, 2023.

\(^1\)Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, P.O. Box 94, Medunsa 0204, Ga-Rankuwa, South Africa, e-mail: hammedabass548@gmail.com, Abassh@ukzn.ac.za.
\(^2\)Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, P.O. Box 94, Medunsa 0204, Ga-Rankuwa, South Africa; Department of Mathematics, Usmanu Danfodiyo University Sokoto, PMB 2364, Sokoto State, Nigeria, e-mail: aremu.kazeem@udusok.edu.ng, aremukazeemolalekan@gmail.com.
\(^3\)School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa, e-mail: oyewolelawalekazeem@gmail.com.
\(^4\)Department of Computer Science and Mathematics, Mountain Top University, Prayer City, Ogun State, Nigeria; School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa, e-mail: dele@aims.ac.za, aamebawondu@mtu.edu.ng.
\(^5\)School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa, e-mail: naraino@ukzn.ac.za.

In this paper, we introduce an inertial forward-backward splitting method together with a Halpern iterative algorithm for approximating a common solution of a finite family of split minimization problems involving two proper, lower semicontinuous and convex functions and a fixed point problem of a nonexpansive mapping in real Hilbert spaces. Under suitable conditions, we prove that the sequence generated by our algorithm converges strongly to a solution of the aforementioned problems. The stepsizes studied in this paper are designed so that they require neither the Lipschitz continuity of the gradient nor prior knowledge of the operator norm. Finally, we present a numerical experiment to show the performance of the proposed method. The results discussed in this paper extend and complement many related results in the literature.

MSC. 47H06, 47H09, 47J05, 47J25.

Keywords. Nonexpansive mapping, minimization problem, inertial method, forward-backward splitting method, fixed point problem.

1 Introduction

Let \(H\) be a real Hilbert space with inner product \(\langle \cdot ,\cdot \rangle \) and induced norm \(\| \cdot \| \), and let \(f,g:H \rightarrow \mathbb {R} \cup \{ +\infty \} \) be two proper, lower semi-continuous and convex functions such that \(f\) is Fréchet differentiable on an open set containing the domain of \(g.\) The Convex Minimization Problem (CMP) is formulated as follows:

\begin{align} \label{aa} \min _{x \in H}\{ f(x)+ g(x)\} . \end{align}

We denote by \(\Upsilon \) the solution set of 1. The CMP 1 is a general form of the classical minimization problem which is given as:

\begin{align} \label{ab} f(x)=\min _{y \in H}f(y). \end{align}

The minimization problems 1 and 2 and their various modifications are known to have notable applications in optimal control, signal processing, system identification, machine learning, and image analysis; see, e.g., [ 5 , 3 , 2 , 26 ] . It is well known that the CMP 1 relates to the following fixed point equation:

\begin{align} x=\operatorname {prox}_{\beta g}(x-\beta \nabla f(x)), \end{align}

where \(\beta \) is a positive real number and \(\operatorname {prox}_{g}\) is the proximal operator of \(g\). The Moreau-Yosida resolvent (proximal operator) of \(g\) in a Hilbert space is defined as follows:

\begin{align} J_{\lambda }^{g}(x)= \operatorname {prox}_{\lambda g}(x)= \operatorname {argmin}_{y \in H} \Big\{ g(y) + \tfrac {1}{2 \lambda }\| y-x\| ^2 \Big\} ,~ \forall ~ x \in H, \end{align}

where \(\operatorname {argmin}~ g:= \Big\{ \overline{x} \in H: g(\overline{x}) \leq g(x) ~ \text{for all}~ x \in H \Big\} .\)
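To make this fixed point characterization concrete, the following minimal Python sketch (our illustration, not part of the original results) iterates \(x\mapsto \operatorname {prox}_{\beta g}(x-\beta \nabla f(x))\) for the hypothetical choices \(g=\lambda \| \cdot \| _1\), whose proximal operator is the soft-thresholding map, and a least-squares \(f\):

\begin{verbatim}
import numpy as np

def prox_l1(x, t):
    # prox_{t*||.||_1}(x): soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward_step(x, grad_f, beta, lam):
    # One evaluation of the map x -> prox_{beta*g}(x - beta*grad f(x)).
    return prox_l1(x - beta * grad_f(x), beta * lam)

# Illustrative data: f(x) = 0.5*||Dx - b||^2, so grad f(x) = D^T (Dx - b).
rng = np.random.default_rng(0)
D, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
grad_f = lambda x: D.T @ (D @ x - b)

x = np.zeros(10)
beta = 1.0 / np.linalg.norm(D.T @ D, 2)  # constant stepsize below 1/L
for _ in range(500):
    x = forward_backward_step(x, grad_f, beta, lam=0.1)
# At a fixed point, x = prox_{beta*g}(x - beta*grad f(x)) minimizes f + g.
\end{verbatim}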

In 2012, Lin and Takahashi [ 28 ] introduced the following forward-backward algorithm:

\begin{align} \label{ac} x_{n+1}=\alpha _n F(x_n) + (1-\alpha _n)\operatorname {prox}_{\beta _n g}(x_n-\beta _n \nabla f(x_n)), \end{align}

where \(F:H \rightarrow H\) is a contraction, \(\{ \alpha _n\} \subset (0,1), \{ \beta _n\} \subset (0, +\infty )\), \(\nabla f\) is Lipschitz continuous and \(g\) is a convex and lower semicontinuous function. They obtained a strong convergence result for algorithm 5 under the following mild conditions:

\begin{align*} \lim _{n\rightarrow \infty }\alpha _n& =0,~ \sum _{n=1}^{\infty }|\alpha _n-\alpha _{n+1}|{\lt} \infty ,\\ \sum _{n=1}^{\infty }\alpha _n& =\infty , \sum _{n=1}^{\infty }|\beta _n-\beta _{n+1}|{\lt} \infty , \quad 0{\lt}a\leq \beta _n \leq \tfrac {2}{L}, \end{align*}

where \(L\) is the Lipschitz constant of \(\nabla f\). Also, Wang and Wang [ 39 ] proposed the following forward-backward splitting method: find \(x_0 \in H\) such that

\begin{align} \label{ae} x_{n+1}=\alpha _n F(x_n) + \gamma _{n} x_n + \mu _n \operatorname {prox}_{\beta _n g}(x_n-\beta _n \nabla f(x_n)), \end{align}

where \(\{ \alpha _n\} \subset (0,1), \{ \mu _n\} \subset (0,2), \{ \gamma _n\} \subset (-2,1)\) with \(\alpha _n + \gamma _n + \mu _n=1\), and \(F:H \rightarrow H\) is a contraction. They proved that the sequence generated by 6 converges strongly to a point in \(\Upsilon .\)
In 2016, Bello Cruz and Nghia [ 10 ] investigated the forward-backward method with a linesearch that eliminates the undesired Lipschitz assumption on the gradient of \(f.\) They proposed the following algorithm and established its weak convergence:

Algorithm 1


Initialization: Take \(x_0\in \operatorname {dom}g,\) \(\sigma {\gt}0, \theta \in (0,1), \delta \in (0,\frac{1}{2}).\)
Iterative steps: Calculate \(x_{n}\) and set

\begin{align*} x_{n+1}=\operatorname {prox}_{\beta _n g}(x_n-\beta _n\nabla f(x_n)) \end{align*}

with \(\beta _n=\mbox{Linesearch}(x_n,\sigma ,\theta ,\delta )\) given as:

Input: Set \(\beta =\sigma \) and \(J(x,\beta )=\operatorname {prox}_{\beta g}(x-\beta \nabla f(x))\) with \(x\in \operatorname {dom}g\)
While

\begin{align*} \beta \| \nabla f(J(x,\beta ))-\nabla f(x)\| {\gt}\delta \| J(x,\beta )-x\| \end{align*}

do \(\beta =\theta \beta .\)
End While
Output \(\beta .\)
Stopping criterion: If \(x_{n+1}=x_n,\) then stop.
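For readers who wish to experiment, the linesearch above translates directly into Python; here \texttt{prox\_g(z, beta)} is assumed to be a caller-supplied routine computing \(\operatorname {prox}_{\beta g}(z)\):

\begin{verbatim}
import numpy as np

def linesearch(x, sigma, theta, delta, grad_f, prox_g):
    # Linesearch(x, sigma, theta, delta) of Algorithm 1: start from
    # beta = sigma and shrink beta by the factor theta while
    #   beta*||grad f(J(x,beta)) - grad f(x)|| > delta*||J(x,beta) - x||,
    # where J(x, beta) = prox_{beta*g}(x - beta*grad f(x)).
    beta = sigma
    J = prox_g(x - beta * grad_f(x), beta)
    while (beta * np.linalg.norm(grad_f(J) - grad_f(x))
           > delta * np.linalg.norm(J - x)):
        beta *= theta
        J = prox_g(x - beta * grad_f(x), beta)
    return beta
\end{verbatim}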

Very recently, Kunrada and Cholamjiak [ 27 ] proposed the forward-backward algorithm involving the viscosity approximation method and stepsize that does not require the Lipschitz continuity condition on the gradient as follows:

Algorithm 2


Initialization: Let \(F:\operatorname {dom}g\to \operatorname {dom}g\) be a contraction. Take \(x_0\in \operatorname {dom}g,\) \(\sigma {\gt}0, \theta \in (0,1), \delta \in (0,\frac{1}{2}),\) and compute

\begin{align*} y_{n}=\operatorname {prox}_{\beta _n g}(x_n-\beta _n\nabla f(x_n)), \end{align*}

where \(\beta _n=\sigma \theta ^{m_n}\) and \(m_n\) is the smallest nonnegative integer such that

\begin{align*} 2\beta _n\max & \Big\{ \| \nabla f(\operatorname {prox}_{\beta _n g}(y_n-\beta _n\nabla f(y_n)))-\nabla f(y_n)\| , \| \nabla f(x_n)-\nabla f(y_n)\| \Big\} \leq \\ & \leq \delta \big( \| (\operatorname {prox}_{\beta _n g}(y_n-\beta _n\nabla f(y_n))-y_n\| +\| x_n-y_n\| \big). \end{align*}

Construct \(x_{n+1}\) by

\begin{align*} x_{n+1}=\alpha _n F(x_n) + (1-\alpha _n)\operatorname {prox}_{\beta _n g}(y_n-\beta _n\nabla f(y_n)). \end{align*}

They proved a strong convergence theorem for algorithm 2 under some weakened assumptions on the stepsize.

We observe that the choices of stepsizes in algorithm 1 and algorithm 2 depend heavily on linesearches, which are known to slow down the rate of convergence of iterative algorithms (see [ 25 , 34 ] ).

The Split Feasibility Problem (SFP) was first introduced in [ 16 ] by Censor and Elfving. Let \(C\) and \(Q\) be nonempty closed convex subsets of real Hilbert spaces \(H_1\) and \(H_2\) respectively and \(A:H_1\to H_2\) be a bounded linear operator. The SFP is defined as follows:

\[ \mbox{Find}~ x^*\in C~ \mbox{such that}~ Ax^*\in Q. \]

The SFP arises in many fields in the real world, such as signal processing, medical image reconstruction, intensity-modulated radiation therapy, sensor networks, antenna design, material science, computerized tomography, data denoising and data compression [ 7 , 12 , 11 , 15 , 17 ] . Several SFP variants for different optimization problems have been studied extensively [...]. Let \(C\) and \(Q\) be nonempty closed and convex subsets of real Hilbert spaces \(H_1\) and \(H_2,\) let \(g:H_1\to \mathbb {R}\cup \{ +\infty \} \) and \(f:H_2\to \mathbb {R}\cup \{ +\infty \} \) be two proper and lower semi-continuous convex functions, and let \(A:H_1\to H_2\) be a bounded linear operator. The Split Minimization Problem (SMP) is to find

\begin{align} \label{spt1} x^*\in C~ \mbox{such that}~ x^*=\operatorname {argmin}\limits _{x\in C}g(x) \end{align}

and such that

\begin{align} \label{spt2} \mbox{the point}~ y^*=Ax^*\in Q~ \mbox{solves}~ y^*=\operatorname {argmin}\limits _{y\in Q}f(y). \end{align}

Many researchers have employed different types of iterative algorithms to study the SMP 7 and 8 in Hilbert and Banach spaces. For instance, Abass et al. [ 5 ] proposed a proximal type algorithm to solve the SMP 7 and 8 in Hilbert spaces. They established that the sequence generated by their proposed algorithm converges strongly to a solution of the SMP. Very recently, Abass et al. [ 3 ] introduced another proximal type algorithm to approximate solutions of systems of SMPs and fixed point problems of nonexpansive mappings in Hilbert spaces. They showed that their algorithm converges to a common solution of the SMP and the fixed point problem of the nonlinear mappings.

Constructing iterative schemes with a faster rate of convergence is usually of great interest. The inertial-type algorithm, which originated from the equation for an oscillator with damping and conservative restoring force, has been an important tool for improving the performance of algorithms and has some nice convergence characteristics. In general, the main feature of inertial-type algorithms is that the previous iterates are used to construct the next one. Since the introduction of inertial-like algorithms, many authors have combined the inertial term \(\theta _n(x_n - x_{n-1})\) with different kinds of iterative algorithms, such as the Mann, Krasnoselskii, Halpern and viscosity methods, to mention a few, to approximate solutions of fixed point problems and optimization problems. Most authors were able to prove weak convergence results, while a few proved strong convergence results.

Polyak [ 33 ] was the first author to propose the heavy ball method. Alvarez and Attouch [ 6 ] employed it in the setting of a general maximal monotone operator using the Proximal Point Algorithm (PPA); the resulting method, called the inertial PPA, is of the form:

\begin{align} \label{1.8} \begin{cases} y_n=x_n + \theta _n(x_n-x_{n-1}),\\ x_{n+1}=(I+ r_n B)^{-1}y_n, n {\gt} 1. \end{cases}\end{align}

They proved that if \(\{ r_n\} \) is non-decreasing and \(\{ \theta _n\} \subset [0,1)\) with

\begin{align} \label{1.9} \sum _{n=1}^{\infty }\theta _n \| x_n-x_{n-1}\| ^2 {\lt} \infty , \end{align}

then algorithm 9 converges weakly to a zero of \(B\). More precisely, condition 12 holds for \(\theta _n {\lt} \frac{1}{3}\). Here, \(\theta _n\) is an extrapolation factor. See also [ 1 , 4 , 6 , 18 , 20 , 24 , 28 ] for results on inertial methods.
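As an illustration (a minimal sketch under our own naming, not code from [ 6 ] ), the inertial PPA 9 can be implemented as follows, given a routine for the resolvent \((I+r_nB)^{-1}\):

\begin{verbatim}
import numpy as np

def inertial_ppa(x0, x1, resolvent, theta=0.3, r=1.0, n_iters=200):
    # Inertial PPA:
    #   y_n     = x_n + theta*(x_n - x_{n-1})  (inertial extrapolation)
    #   x_{n+1} = (I + r*B)^{-1} y_n           (backward/resolvent step)
    # `resolvent(y, r)` must return (I + r*B)^{-1} y; a constant theta
    # below 1/3 is the classical safe choice.
    x_prev, x = x0, x1
    for _ in range(n_iters):
        y = x + theta * (x - x_prev)
        x_prev, x = x, resolvent(y, r)
    return x

# Toy monotone operator B(x) = x, with resolvent y/(1 + r); the
# iterates approach the unique zero of B, namely the origin.
print(inertial_ppa(np.ones(3), 2 * np.ones(3), lambda y, r: y / (1 + r)))
\end{verbatim}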

We highlight our contributions in this paper as follows:

  • Unlike the result of [ 10 ] , which proved weak convergence, we prove a strong convergence theorem for the sequence generated by our algorithm. Note that in solving optimization problems, strong convergence algorithms are more desirable than their weak convergence counterparts.

  • The stepsize used in our algorithm is chosen self-adaptively and is not restricted by any Lipschitz constant. This improves the corresponding results of [ 5 , 10 , 24 ] .

  • The method of proof in our convergence analysis is simpler than, and different from, the methods of proof used by many other authors, such as in [ 2 , 10 , 38 , 30 ] .

  • The CMP considered in our article generalizes the one considered in [ 3 ] , which corresponds to the case where \(f\) is identically zero.

  • We would like to emphasize that the main advantage of our algorithm is that it does not require knowledge of the Lipschitz constant of the gradient, which makes it more practical for computation.

Inspired by the aforementioned works and the ongoing research in this direction, we develop an inertial Halpern-type forward-backward splitting method for approximating a common solution of a finite family of SMPs associated with two proper, lower semicontinuous and convex functions and a fixed point problem of a nonexpansive mapping in real Hilbert spaces. Under suitable conditions, we establish that the sequence generated by our algorithm converges strongly to a solution of the aforementioned problems. The selection of the stepsizes in our algorithm requires neither the Lipschitz continuity of the gradient nor prior knowledge of the operator norm. Finally, we present a numerical experiment to show the performance of the proposed method. Our result extends and complements many related results in the literature.

2 Preliminaries

We state some known and useful results which will be needed in the proof of our main theorem. In the sequel, we denote strong and weak convergence by "\(\rightarrow \)" and "\(\rightharpoonup ,\)" respectively.

Definition 3

Let \(C\) be a convex subset of a vector space \(X\) and \(f:C\to \mathbb {R}\cup \{ +\infty \} \) be a map. Then,

  • \(f\) is convex if for each \(\lambda \in [0,1]\) and \(x,y\in C,\) we have

    \begin{align*} f(\lambda x + (1-\lambda )y)\leq \lambda f(x) + (1-\lambda )f(y), \end{align*}
  • \(f\) is called proper if there exists at least one \(x\in C\) such that

    \begin{align*} f(x)\neq +\infty , \end{align*}
  • \(f\) is lower semi-continuous at \(x_0\in C\) if

    \begin{align*} f(x_0)\leq \liminf \limits _{x\to x_0}f(x). \end{align*}
Let \(H\) be a real Hilbert space. For all \(x, y \in H,\) we have
\begin{align*} \| x+y\| ^2 & =\| x\| ^2 + 2\langle x,y\rangle + \| y\| ^2, \end{align*}

and

\begin{align*} \| x+y\| ^2 \leq \| x\| ^2 + 2\langle y, x+y\rangle . \end{align*}

Lemma 4 [ 19 ]

Let \(H\) be a real Hilbert space. Then, for all \(x, y \in H\) and \(\alpha \in (0,1),\) the following identities hold:

\begin{align*} \| \alpha x+ (1-\alpha )y\| ^2& =\alpha \| x\| ^2+ (1-\alpha )\| y\| ^2-\alpha (1-\alpha )\| x-y\| ^2.\\ 2\langle x, y\rangle & =\| x\| ^2+ \| y\| ^2-\| x-y\| ^2=\| x+y\| ^2-\| x\| ^2-\| y\| ^2. \end{align*}

Definition 5

Let \(H\) be a real Hilbert space and \(h:H \rightarrow \mathbb {R} \cup \{ +\infty \} \) be a proper, lower semicontinuous and convex function. The subdifferential of \(h\) at \(x\) is defined by

\begin{align*} \partial h(x)=\Big\{ v \in H: \langle v, y-x\rangle \leq h(y)-h(x),~ \forall ~ y \in H\Big\} . \end{align*}

Lemma 6 [ 14 ]

Let \(H\) be a real Hilbert space. The subdifferential operator \(\partial h\) is maximal monotone. Furthermore, the graph of \(\partial h, \operatorname {Gra}(\partial h)=\{ (x,v) \in H \times H: v \in \partial h(x)\} \) is demiclosed, i.e., if the sequence \((x_n, v_n) \subset \operatorname {Gra}(\partial h)\) is such that \(\{ x_n\} _{n \in \mathbb {N}}\) converges weakly to \(x\) and \(\{ v_n\} _{n \in \mathbb {N}}\) converges strongly to \(v\), then \((x,v) \in \operatorname {Gra}(\partial h).\)

We briefly recall that the proximal operator \(\operatorname {prox}_{g}:H \rightarrow \operatorname {dom} (g)\) is given by \(\operatorname {prox}_{g}(z)=(I+ \partial g)^{-1}(z),~ z \in H,\) where \(I\) is the identity operator. It is well known that the proximal operator is single-valued with full domain. It is also known that

\begin{align} \label{AA} \tfrac {z-\operatorname {prox}_{\beta g}(z)}{\beta } \in \partial g(\operatorname {prox}_{\beta g}(z)),~ \forall ~ z \in H, \beta {\gt}0. \end{align}
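For illustration, taking \(g=\frac{1}{2}\| \cdot \| ^2\) (our example, not from the paper), this inclusion can be verified directly:

\begin{align*} \operatorname {prox}_{\beta g}(z)& =\operatorname {argmin}_{y \in H}\Big\{ \tfrac {1}{2}\| y\| ^2 + \tfrac {1}{2\beta }\| y-z\| ^2\Big\} =\tfrac {z}{1+\beta },\\ \tfrac {z-\operatorname {prox}_{\beta g}(z)}{\beta }& =\tfrac {1}{\beta }\Big(z-\tfrac {z}{1+\beta }\Big)=\tfrac {z}{1+\beta }=\nabla g\big(\operatorname {prox}_{\beta g}(z)\big) \in \partial g\big(\operatorname {prox}_{\beta g}(z)\big). \end{align*}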

Proposition 7 [ 9 ]

Let \(H\) be a real Hilbert space and \(h:H \rightarrow \mathbb {R} \cup \{ +\infty \} \) be a proper, lower semicontinuous and convex function. Then, for \(x \in \operatorname {dom} (h)\) and \(y \in H\)

\begin{align*} h'(x, y-x) + h(x) \leq h(y). \end{align*}

Lemma 8 [ 41 ]

Let \(C\) be a nonempty, closed and convex subset of a real Hilbert space \(H\) and \( T: C \rightarrow C\) be a nonexpansive mapping. Then \(I-T\) is demiclosed at \(0\) (i.e., if \(\{ x_n\} \) converges weakly to \(x\in C\) and \(\{ x_n-Tx_n\} \) converges strongly to \(0\), then \(x=Tx\)).

Lemma 9 [ 3 ]

Let \(H\) be a real Hilbert space and \(f_j:H \rightarrow (- \infty , \infty ],~ j=1,2,\dots , m\) be proper convex and lower semi-continuous functions. Let \(T:H\to H\) be a nonexpansive mapping, then for \(0{\lt}\lambda \leq \mu \), we have that

\begin{align*} F\left(T {\prod _{j=1}^{m}} J_{\mu }^{(j)}\right)\subseteq \left(F(T)\cap \left(\bigcap \limits _{j=1}^m F\left(J_{\lambda }^{(j)}\right)\right)\right). \end{align*}

Lemma 10 [ 8 , 26 ]

Let \(\{ a_n\} \) be a sequence of non-negative real numbers, \(\{ \gamma _n\} \) be a sequence of real numbers in \((0,1)\) with conditions \(\sum \limits _{n=1}^{\infty }\gamma _n=\infty \) and \(\{ d_n\} \) be a sequence of real numbers. Assume that

\begin{align*} a_{n+1}\leq (1-\gamma _n)a_n+\gamma _nd_n,~ ~ ~ n\geq 1. \end{align*}

If \(\limsup \limits _{k \rightarrow \infty }d_{n_k}\leq 0\) for every subsequence \(\{ a_{n_k}\} \) of \(\{ a_n\} \) satisfying the condition:
\(\limsup \limits _{k \rightarrow \infty }(a_{n_k}-a_{n_{k}+1})\leq 0,\) then \(\lim \limits _{n \rightarrow \infty }a_n=0.\)

3 Main results

Throughout this section, we assume that

  1. \(H_1\) and \(H_2\) are real Hilbert spaces and \(A:H_1 \rightarrow H_2\) is a nonzero bounded linear operator. Let \(f,g:H_1 \rightarrow \mathbb {R} \cup \{ +\infty \} \) be two proper, lower semi-continuous and convex functions with \(\operatorname {dom} g \subseteq \operatorname {dom} f\). The function \(f\) is Fréchet differentiable on an open set containing \(\operatorname {dom} g\). The gradient \(\nabla f\) is uniformly continuous on any bounded subset of \(\operatorname {dom} g\) and maps any bounded subset of \(\operatorname {dom} g\) to a bounded set in \(H_1\).

  2. For each \(j=1,2, \cdots ,m,\) let \(h_j:H_2 \rightarrow \mathbb {R} \cup \{ +\infty \} \) be a proper, lower semi-continuous and convex function. Let \(S:H_2 \rightarrow H_2\) be a nonexpansive mapping, and define \(\Gamma :=\Big\{ z \in H_1: z~ \text{solves}~ \min \limits _{x \in H_1}\{ f(x)+g(x)\} ~ \text{and}~ Az \in \operatorname {Fix}(S) \cap \Big(\bigcap \limits _{j=1}^{m}\operatorname {argmin}_{y\in H_2}h_j(y)\Big)\Big\} \neq \emptyset .\)


Algorithm 11


Initialization: Let \(\sigma {\gt}0, \mu \in (0,1), \delta \in (0,\frac{1}{2}), 0{\lt} \lambda \leq \lambda _n\) and let \(u,x_0, x_1 \in H_1\) be chosen arbitrarily.
Iterative steps: Calculate \(x_{n+1}\) as follows:
Step 1: Given the iterates \(x_{n-1}\) and \(x_n\) for each \(n \geq 1\), choose \(\theta _n\) such that \(0 \leq \theta _n \leq \overline{\theta _n}\), where

\begin{align} \label{Inert} \overline{\theta _n}= \begin{cases} \min \Big\{ \theta , ~ ~ \frac{\epsilon _n}{\| x_n-x_{n-1}\| }\Big\} ,~ ~ & \text{if}~ x_n \neq x_{n-1};\\ \theta ,~ ~ ~ & \text{otherwise}. \end{cases}\end{align}
\begin{align*} u_n=x_n + \theta _n(x_n-x_{n-1}), \end{align*}

Step 2: Compute the stepsize

\begin{align} \label{Size} \gamma _{n}= \begin{cases} \tfrac {\| (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^2}{2\| A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^2},~ & (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n \neq 0;\\ \gamma {\gt} 0,~ & \text{otherwise}. \end{cases}\end{align}

Compute

\begin{align*} y_n=u_n + \gamma _n A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n, \end{align*}

Step 3: Compute

\begin{align} \label{3.3c} w_n=\operatorname {prox}_{\lambda _n g}(y_n-\lambda _n \nabla f(y_n)), \end{align}

where \(\lambda _n=\sigma \mu ^{m_n} \) and \(m_n\) is the smallest nonnegative integer such that

\begin{align*} \lambda _n\| \nabla f(w_n)- \nabla f(y_n)\| \leq \delta \| w_n-y_n\| . \end{align*}

Step 4: Construct \(x_{n+1}\) by

\begin{align*} x_{n+1}=\beta _n u + (1-\beta _n)w_n. \end{align*}

Let \(n:=n+1\) and return to step 1.
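To fix ideas, the following NumPy sketch mirrors the four steps of algorithm 11 under simplifying assumptions: \(H_1\) and \(H_2\) are Euclidean spaces, \(A\) is a matrix (so \(A^*=A^T\)), the proximal and gradient oracles are supplied by the caller, and the parameter \(\lambda \) used in the proximal operators of the \(h_j\) is fixed at \(\sigma \). It is an illustration of the scheme, not the authors' code:

\begin{verbatim}
import numpy as np

def algorithm_11(x0, x1, u, A, S, prox_g, prox_h_list, grad_f,
                 beta, eps, sigma=1.0, mu=0.5, delta=0.25,
                 theta=0.9, gamma_default=1.0, n_iters=100):
    # `A` is a q-by-p matrix (so A^* = A.T), `S` a nonexpansive map,
    # `prox_g(z, lam)` and each entry of `prox_h_list` evaluate proximal
    # operators, and `beta(n)`, `eps(n)` give the control sequences.
    x_prev, x = x0, x1
    for n in range(1, n_iters + 1):
        # Step 1: inertia with theta_n <= eps_n/||x_n - x_{n-1}||.
        dx = np.linalg.norm(x - x_prev)
        theta_n = min(theta, eps(n) / dx) if dx > 0 else theta
        u_n = x + theta_n * (x - x_prev)

        # Step 2: self-adaptive stepsize gamma_n (no operator norm).
        def T(z, lam):  # z -> S(prox_{lam h_m}(...prox_{lam h_1}(z)...))
            for prox_h in prox_h_list:
                z = prox_h(z, lam)
            return S(z)
        r = T(A @ u_n, sigma) - A @ u_n   # (S Prod_j prox - I) A u_n
        Ar = A.T @ r                      # A^* applied to the residual
        gamma_n = r @ r / (2 * (Ar @ Ar)) if np.any(Ar) else gamma_default
        y_n = u_n + gamma_n * Ar

        # Step 3: forward-backward step, backtracking lam_n = sigma*mu^m.
        lam = sigma
        w_n = prox_g(y_n - lam * grad_f(y_n), lam)
        while (lam * np.linalg.norm(grad_f(w_n) - grad_f(y_n))
               > delta * np.linalg.norm(w_n - y_n)):
            lam *= mu
            w_n = prox_g(y_n - lam * grad_f(y_n), lam)

        # Step 4: Halpern step toward the anchor u.
        x_prev, x = x, beta(n) * u + (1 - beta(n)) * w_n
    return x
\end{verbatim}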

Remark 12

We assume that \(\{ \epsilon _n\} \) is a positive sequence such that \(\epsilon _n= o(\beta _n)\), which implies that \(\lim _{n\rightarrow \infty } \frac{\epsilon _n}{\beta _n}=0\), and that \(\{ \beta _n\} \subset (0,1)\) satisfies the following conditions:

\begin{align*} \lim _{n\rightarrow \infty } \beta _n=0,~ \sum _{n=1}^{\infty }\beta _n=\infty . \end{align*}

From 14, it is easy to see that

\begin{align*} \lim _{n\rightarrow \infty }\tfrac {\theta _n}{\beta _n}\| x_n-x_{n-1}\| =0. \end{align*}

Indeed, we get that \(\theta _n\| x_n-x_{n-1}\| \leq \epsilon _n\) for each \(n \geq 1\), which together with \(\lim _{n\rightarrow \infty }\frac{\epsilon _n}{\beta _n}=0\) implies that

\begin{align*} \lim _{n\rightarrow \infty }\tfrac {\theta _n}{\beta _n}\| x_n-x_{n-1}\| \leq \lim _{n\rightarrow \infty }\tfrac {\epsilon _n}{\beta _n}=0. \end{align*}
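For instance, \(\beta _n=\tfrac {1}{n+1}\) and \(\epsilon _n=\tfrac {1}{(n+1)^2}\) satisfy all of the above requirements, since \(\lim _{n\rightarrow \infty }\beta _n=0\), \(\sum _{n=1}^{\infty }\tfrac {1}{n+1}=\infty \) and \(\tfrac {\epsilon _n}{\beta _n}=\tfrac {1}{n+1}\rightarrow 0.\)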

Theorem 13

Let \(\{ x_n\} \) be the sequence generated by algorithm 11. Then \(\{ x_n\} \) is bounded.

Proof.
Let \(z \in \Gamma ;\) then \(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}Az=Az\), and we obtain from algorithm 11 that
\begin{align} \label{3.4} \| y_n-z\| ^2 & =\| u_n + \gamma _n A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n-z\| ^2\nonumber \\ & =\| u_n-z\| ^2 + \gamma _n^2\| A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^2 \nonumber \\ & \quad + 2 \gamma _n \langle u_n-z, A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\rangle , \end{align}

where

\begin{align} \label{3.5} & \Big\langle u_n-z, A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\Big\rangle = \nonumber \\ & = \Big\langle Au_n-Az, (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\Big\rangle \nonumber \\ & =\Big\langle S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}Au_n-Az\nonumber \\ & \quad -(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n, (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\Big\rangle \nonumber \\ & =\Big\langle S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}Au_n-Az,(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\Big\rangle \nonumber \\ & \quad - \Big\langle (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n, (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\Big\rangle \nonumber \\ & =\Big\langle S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}Au_n-Az, S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}Au_n-Au_n\Big\rangle \nonumber \\ & \quad -\| (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^2\nonumber \\ & =\tfrac {1}{2}\bigg(\| S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}Au_n-Az\| ^2+\| S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}Au_n-Au_n\| ^2\nonumber \\ & \quad -\| S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}Au_n-Az-(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^2\bigg)\nonumber \\ & \quad -\| (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^2\nonumber \\ & =\tfrac {1}{2}\| S\Pi _{j=1}^{m} \operatorname {prox}_{\lambda _n h_j}Au_n-Az\| ^2+\tfrac {1}{2}\| S\Pi _{j=1}^{m} \operatorname {prox}_{\lambda _n h_j}Au_n-Au_n\| ^2\nonumber \\ & \quad -\tfrac {1}{2}\| Au_n-Az\| ^2-\| (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^2\nonumber \\ & \leq \tfrac {1}{2}\| Au_n-Az\| ^2-\tfrac {1}{2}\| S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}Au_n-Au_n\| ^2-\tfrac {1}{2}\| Au_n-Az\| ^2\nonumber \\ & =\tfrac {-1}{2}\| (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^2. \end{align}

On combining 21 and 22, we obtain that

\begin{align} \label{3.6} \| y_n-z\| ^2 & \leq \| u_n-z\| ^2 + \gamma _n^2\| A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^2 \nonumber \\ & \quad -\gamma _n\| (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^2\nonumber \\ & =\| u_n-z\| ^2 -\gamma _n\big[\| (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^2\nonumber \\ & \quad -\gamma _n\| A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^2\big]\nonumber \\ & =\| u_n-z\| ^2 -\tfrac {1}{4}\tfrac {\| (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^4}{\| A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^2}. \end{align}

Since the last term on the right-hand side of 23 is nonpositive, we have that

\begin{align} \label{3.7} \| y_n-z\| ^2 \leq \| u_n-z\| ^2. \end{align}

Hence, from 11, we have

\begin{align} \label{3.8} \| y_n-z\| & \leq \| u_n-z\| \nonumber \\ & =\| x_n + \theta _n(x_n-x_{n-1})-z\| \nonumber \\ & \leq \| x_n-z\| + \theta _n\| x_n-x_{n-1}\| \nonumber \\ & =\| x_n-z\| + \beta _n \cdot \tfrac {\theta _n}{\beta _n}\| x_n-x_{n-1}\| . \end{align}

Hence, by applying the condition \(\frac{\theta _n}{\beta _n}\| x_n-x_{n-1}\| \rightarrow 0\), there exists a constant \(M_1 {\gt} 0\) such that

\begin{equation*} \tfrac {\theta _n}{\beta _n}\| x_n-x_{n-1}\| \leq M_1,~ ~ \forall ~ n \geq 1, \end{equation*}

and so,

\begin{align} \label{3.12} \| y_n-z\| \leq \| x_n-z\| + \beta _nM_1. \end{align}

By applying 13 and 11, we observe that

\begin{align} \label{3.10c} \tfrac {y_n-w_n}{\lambda _n}-\nabla f(y_n)=\tfrac {y_n-\operatorname {prox}_{\lambda _n g}(y_n-\lambda _n \nabla f(y_n))}{\lambda _n}-\nabla f(y_n)\in \partial g(w_n). \end{align}

From the convexity of \(g\), we obtain

\begin{align} \label{3.11c} g(x)-g(w_n) \geq \Big\langle \tfrac {y_n-w_n}{\lambda _n}-\nabla f(y_n), x-w_n\Big\rangle ~ \forall ~ x \in \operatorname {dom} (g). \end{align}

Also the convexity of \(f\) implies

\begin{align} \label{3.12c} f(x)-f(y) \geq \big\langle \nabla f(y), x-y\big\rangle ~ \forall ~ x \in \operatorname {dom} (f), y \in \operatorname {dom} (g). \end{align}

On combining 28 and 29 with any \(x \in \operatorname {dom}(g) \subseteq \operatorname {dom} (f)\) and \(y=y_n \in \operatorname {dom} (g),\) we obtain

\begin{align} \label{3.13c} & g(x)-g(w_n) + f(x)-f(y_n) \geq \nonumber \\ & \geq \big\langle \tfrac {y_n-w_n}{\lambda _n}-\nabla f(y_n), x-w_n \big\rangle + \big\langle \nabla f(y_n), x-y_n \big\rangle \nonumber \\ & =\tfrac {1}{\lambda _n}\langle y_n-w_n, x-w_n\rangle + \langle \nabla f(y_n)-\nabla f(w_n), w_n-y_n\rangle + \langle \nabla f(w_n), w_n-y_n\rangle \nonumber \\ & \geq \tfrac {1}{\lambda _n}\langle y_n-w_n, x-w_n\rangle -\| \nabla f(y_n)-\nabla f(w_n)\| ~ \| w_n-y_n\| + \langle \nabla f(w_n), w_n-y_n\rangle \nonumber \\ & \geq \tfrac {1}{\lambda _n}\langle y_n-w_n, x-w_n\rangle -\tfrac {\delta }{\lambda _n}\| y_n-w_n\| ^2 + \langle \nabla f(w_n), w_n-y_n\rangle , \end{align}

where the last inequality follows from step 3 of 11. It then follows that

\begin{align} \label{3.14c} & \langle y_n-w_n, w_n-x\rangle \geq \nonumber \\ & \geq \lambda _n \big[f(y_n)+ g(w_n)-(f+g)(x) + \langle \nabla f(w_n), w_n-y_n\rangle \big]-\delta \| y_n-w_n\| ^2. \end{align}

Setting \(x=y_n\) and \(y=w_n\) in 29, we obtain that

\begin{align*} f(y_n)-f(w_n) \geq \langle \nabla f(w_n), y_n-w_n\rangle . \end{align*}

This, together with 31 yields

\begin{align} \label{3.15c} \langle y_n\! -\! w_n, w_n\! -\! x\rangle & \geq \lambda _n \big[ f(y_n) \! +\! g(w_n)\! -\! (f+g)(x) \! +\! f(w_n)\! -\! f(y_n)\big] \! -\! \delta \| y_n\! -\! w_n\| ^2\nonumber \\ & =\lambda _n\big[(f+g)(w_n)-(f+g)(x)\big]-\delta \| y_n-w_n\| ^2. \end{align}

On using

\begin{align*} 2 \langle y_n-w_n, w_n-x\rangle =\| y_n-x\| ^2-\| y_n-w_n\| ^2-\| w_n-x\| ^2, \end{align*}

we obtain from 32 that

\begin{align} \label{3.16c} \| w_n-x\| ^2 \leq \| y_n-x\| ^2 - 2 \lambda _n \big[(f+g)(w_n)-(f+g)(x)\big]-(1-2\delta )\| y_n-w_n\| ^2. \end{align}

Since \(z\in \Gamma \), we obtain from 33 that

\begin{align} \label{3.17c} \| w_n-z\| ^2 \leq \| y_n-z\| ^2-(1-2\delta )\| y_n-w_n\| ^2. \end{align}

From 11, 26 and 34, we get

\begin{align*} \| x_{n+1}-z\| & =\| \beta _n u + (1-\beta _n)w_n-z\| \nonumber \\ & \leq \beta _n\| u-z\| + (1-\beta _n)\| w_n-z\| \nonumber \\ & \leq \beta _n\| u-z\| + (1-\beta _n)\big[\| x_n-z\| + \beta _nM_1\big]\nonumber \\ & \leq (1-\beta _n)\| x_n-z\| + \beta _n\big[\| u-z\| + M_1\big]\nonumber \\ & \leq \max \{ \| x_n-z\| , \| u-z\| + M_1\} ,\nonumber \\ & \vdots \nonumber \\ & \leq \max \{ \| x_1-z\| , \| u-z\| + M_1\} . \end{align*}

Hence, the sequence \(\{ x_n\} \) is bounded. Consequently, it follows from 21–34 that the sequences \(\{ u_n\} , \{ y_n\} \) and \(\{ w_n\} \) are also bounded.


Lemma 14

Assume that \(\{ y_n\} \) is defined as stated in algorithm 11. Then

\begin{align*} \| y_n\! -\! z\| ^2 \leq \| u_n-\! z\! \| ^2\! -\! \| y_n\! -\! u_n\| ^2 \! +\! 2 \gamma _{n}\| y_n\! -\! u_n\| \! \cdot \! \| A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| . \end{align*}

Proof.
\begin{align*} \| y_n-z\| ^2 & =\| (u_n + \gamma _n A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n)-z\| ^2\\ & \leq \Big\langle u_n-z, u_n + \gamma _n A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n-z\Big\rangle \\ & =\tfrac {1}{2}\big[ \| u_n-z\| ^2 + \| u_n + \gamma _n A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n-z\| ^2\\ & \quad -\| y_n-z-(u_n + \gamma _nA^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n-z)\| ^2\big]\\ & \leq \tfrac {1}{2}\big[\| y_n-z\| ^2 + \| u_n-z\| ^2 + \gamma _n(\gamma _n\| A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^2\nonumber \\ & \quad -\| (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^2)\\ & \quad -\| y_n-z-(u_n + \gamma _nA^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n-z)\| ^2\big]\\ & \leq \tfrac {1}{2}\big[\| y_n-z\| ^2 + \| u_n-z\| ^2-(\| y_n-u_n\| ^2 \nonumber \\ & \quad + \gamma _n^2\| A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^2\nonumber \\ & \quad -2\gamma _n\langle y_n-u_n, A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\rangle )\big]\\ & \leq \tfrac {1}{2}\big[\| y_n-z\| ^2+ \| u_n-z\| ^2-\| y_n-u_n\| ^2 \nonumber \\ & \quad + 2 \gamma _n\| y_n-u_n\| \| A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| \big]. \end{align*}

Thus, we conclude that

\begin{align} \label{3.2b} & \| y_n-z\| ^2 \leq \\ & \leq \| u_n-z\| ^2-\| y_n-u_n\| ^2 + 2 \gamma _n\| y_n-u_n\| ~ \| A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| .\nonumber \end{align}


Theorem 15

Assume that (1)–(2) hold. Then the sequence \(\{ x_n\} \) generated by algorithm 11 converges strongly to \(x^*=P_{\Gamma }u,\) where \(P_{\Gamma }\) denotes the metric projection of \(H_1\) onto the solution set \(\Gamma \).

Proof.
Let \(z=x^*=P_{\Gamma }u \in \Gamma \); then we have from algorithm 11 that
\begin{align} \label{3.22} \| u_n-z\| ^2& =\| x_n + \theta _n(x_n-x_{n-1})-z\| ^2\nonumber \\ & =\| (x_n-z) + \theta _n(x_n-x_{n-1})\| ^2\nonumber \\ & =\| x_n-z\| ^2 + 2 \theta _n \langle x_n-z, x_n-x_{n-1}\rangle + \theta _n^2\| x_n-x_{n-1}\| ^2\nonumber \\ & \leq \| x_n-z\| ^2 + 2\theta _n\| x_n-z\| ~ \| x_n-x_{n-1}\| + \theta _n^2\| x_n-x_{n-1}\| ^2\nonumber \\ & \leq \| x_n-z\| ^2 + \theta _n\| x_n-x_{n-1}\| \big[2\| x_n-z\| + \theta _n\| x_n-x_{n-1}\| \big]\nonumber \\ & \leq \| x_n-z\| ^2 +{\theta _n}\| x_n-x_{n-1}\| M_2, \end{align}

where \(M_2:=\sup _{n\geq 1}\big\{ 2\| x_n-z\| + \theta _n\| x_n-x_{n-1}\| \big\} {\gt}0\) is finite since \(\{ x_n\} \) is bounded by theorem 13. Now, from algorithm 11, 23, 34, 35 and 36, we obtain that

\begin{align} \label{3.23} \| w_n-z\| ^2 \leq & \| x_n-z\| ^2 + \theta _n\| x_n-x_{n-1}\| M_2-\| y_n-u_n\| ^2\nonumber \\ & + 2 \gamma _n\| y_n-u_n\| ~ \| A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| \nonumber \\ & -(1-2\delta )\| y_n-w_n\| ^2-\tfrac {1}{4}\tfrac {\| (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^4}{\| A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^2}. \end{align}

From algorithm 11 and 37, we conclude that

\begin{align} \label{3.3z} \| x_{n+1}-z\| ^2 & \leq (1-\beta _n)\| x_n-z\| ^2 +(1-\beta _n) \theta _n\| x_n-x_{n-1}\| M_2\nonumber \\ & \quad -\tfrac {1}{4}(1-\beta _n)\tfrac {\| (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^4}{\| A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| ^2}\nonumber \\ & \quad -(1-\beta _n)\| y_n-u_n\| ^2 \nonumber \\ & \quad + 2 \gamma _n(1-\beta _n)\| y_n-u_n\| ~ \| A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _n h_j}-I)Au_n\| \nonumber \\ & \quad -(1-\beta _n)(1-2\delta )\| y_n-w_n\| ^2 + 2\beta _n \langle u-z, x_{n+1}-z\rangle \\ & \leq (1-\beta _n)\| x_n-z\| ^2 \nonumber \\ & \quad + \beta _n\bigg[\tfrac {\theta _n}{\beta _n}\| x_n-x_{n-1}\| (1-\beta _n)M_2 + 2\langle u-z, x_{n+1}-z\rangle \bigg].\label{3.25a} \end{align}

Putting \(d_n=\bigg[\frac{\theta _n}{\beta _n}\| x_n-x_{n-1}\| (1-\beta _n)M_2 + 2\langle u-z, x_{n+1}-z\rangle \bigg]\), in view of lemma 10, we need to show that \(\limsup \limits _{k\rightarrow \infty }d_{n_k}\leq 0\) for every subsequence \(\{ \| x_{n_k}- x^*\| \} \) of \(\{ \| x_n- x^*\| \} \) satisfying the condition

\begin{align} \label{3.26} \limsup \limits _{k \rightarrow \infty }\{ \| x_{n_{k}}- x^*\| -\| x_{n_{k+1}}- x^*\| \} \leq 0. \end{align}

To show this, suppose that \(\{ \| x_{n_k}- x^*\| \} \) is a subsequence of \(\{ \| x_n- x^*\| \} \) such that 40 holds. Then

\begin{align} \label{3.28} & \limsup _{k \rightarrow \infty }(\| x_{n_{k}}- x^*\| ^2-\| x_{n_{k+1}}- x^*\| ^2) =\nonumber \\ & =\limsup _{k \rightarrow \infty }\Big((\| x_{n_{k}}- x^*\| -\| x_{n_{k+1}}- x^*\| ) (\| x_{n_{k}}- x^*\| + \| x_{n_{k+1}}- x^*\| )\Big)\nonumber \\ & \leq 0. \end{align}

From 38 and 41, we get

\begin{align} \label{3.3x} & \limsup _{k \rightarrow \infty }\bigg(\tfrac {1}{4}(1-\beta _{n_k})\tfrac {\| (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _{n_k} h_j}-I)Au_{n_k}\| ^4}{\| A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _{n_k} h_j}-I)Au_{n_k}\| ^2} \bigg) \leq \nonumber \\ & \leq \limsup _{k \rightarrow \infty }\bigg((1-\beta _{n_k})\| x_{n_k}-x^*\| ^2 -\| x_{n_{k+1}}-x^*\| ^2\nonumber \\ & \quad + \beta _{n_k}\bigg[\tfrac {\theta _{n_k}}{\beta _{n_k}}\| x_{n_k}-x_{{n_k}-1}\| (1-\beta _{n_k})M_2 + 2\langle u-z, x_{n_{k}+1}-z\rangle \bigg]\bigg)\nonumber \\ & =\limsup _{k \rightarrow \infty }\bigg(\| x_{n_k}-x^*\| ^2-\| x_{n_{k+1}}-x^*\| ^2\bigg)\nonumber \\ & \leq 0. \end{align}

Thus,

\begin{align} \lim _{k \rightarrow \infty }\tfrac {\| (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _{n_k} h_j}-I)Au_{n_k}\| ^4}{\| A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _{n_k} h_j}-I)Au_{n_k}\| ^2}=0, \end{align}

which implies that

\begin{align*} & \tfrac {1}{\| A\| }\| (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _{n_k} h_j}-I)Au_{n_k}\| \nonumber =\\ & =\| (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _{n_k} h_j}-I)Au_{n_k}\| \cdot \tfrac {\| (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _{n_k} h_j}-I)Au_{n_k}\| }{\| A\| \cdot \| (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _{n_k} h_j}-I)Au_{n_k}\| }\\ & \leq \tfrac {\| (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _{n_k} h_j}-I)Au_{n_k}\| ^2}{\| A^*(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _{n_k} h_j}-I)Au_{n_k}\| } \rightarrow 0. \end{align*}

Hence,

\begin{align} \label{3.31} \lim _{k \rightarrow \infty }\| (S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda _{n_k} h_j}-I)Au_{n_k}\| =0. \end{align}

Also, following the same approach as in 42, we obtain that

\begin{align} \label{3.32} \lim _{k \rightarrow \infty }\| y_{n_k}-u_{n_k}\| =0. \end{align}

Using 38, we get

\begin{align*} & \limsup _{k \rightarrow \infty }\bigg((1-\beta _{n_k})(1-2\delta )\| y_{n_k}-w_{n_k}\| ^2\bigg)\leq \nonumber \\ & \leq (1-\beta _{n_k})\| x_{n_k}-x^*\| ^2 -\| x_{n_{k+1}}-x^*\| ^2\nonumber \\ & \quad + \beta _{n_k}\bigg[\tfrac {\theta _{n_k}}{\beta _{n_k}}\| x_{n_k}-x_{{n_k}-1}\| (1-\beta _{n_k})M_2 +2\langle u-z, x_{n_{k}+1}-z\rangle \bigg]\nonumber \\ & =\limsup _{k \rightarrow \infty }\bigg(\| x_{n_k}-x^*\| ^2-\| x_{n_{k+1}}-x^*\| ^2\bigg)\nonumber \\ & \leq 0. \end{align*}

Thus, we obtain that

\begin{align} \label{3.33} \lim _{k \rightarrow \infty }\| y_{n_k}-w_{n_k}\| =0. \end{align}

From 11, we have

\begin{align} \label{3.34} \| u_{n_k}-x_{n_k}\| \leq \beta _{n_k}\bigg[\tfrac {\theta _{n_k}}{\beta _{n_k}}\| x_{n_k}-x_{n_{k}-1}\| \bigg]\rightarrow 0,~ \text{as}~ k \rightarrow \infty . \end{align}

From 45, 46 and 47, we obtain that

\begin{align} \label{3.31a} \lim _{k \rightarrow \infty }\| u_{n_k}-w_{n_k}\| =0=\lim _{k \rightarrow \infty }\| y_{n_k}-x_{n_k}\| . \end{align}

Moreover, applying 47 and 48, we get

\begin{align} \label{3.34b} \lim _{k \rightarrow \infty }\| w_{n_k}-x_{n_k}\| =0. \end{align}

Using 11, we obtain that

\begin{align} \label{3.32a} \lim _{k \rightarrow \infty }\| x_{n_{k}+1}-w_{n_k}\| =0. \end{align}

Considering algorithm 11 and 49, we obtain

\begin{align} \label{3.49} \| x_{n_{k}+1}-x_{n_k}\| \leq \beta _{n_k}\| u-x_{n_k}\| + (1-\beta _{n_k})\| w_{n_k}-x_{n_k}\| \rightarrow 0,~ k \rightarrow \infty . \end{align}

Since \(\{ x_{n_k}\} \) is bounded, there exists a subsequence \(\{ x_{n_{k_j}}\} \) of \(\{ x_{n_k}\} \) which converges weakly to some \(\bar{x} \in H_1\). Also, from 47, 48 and 49, the corresponding subsequences \(\{ u_{n_{k_j}}\} \), \(\{ y_{n_{k_j}}\} \) and \(\{ w_{n_{k_j}}\} \) converge weakly to \(\bar{x}\). Using 44, lemma 8, lemma 9 and the fact that \(A\) is a bounded linear operator, we obtain \(A\bar{x} \in F(S\Pi _{j=1}^{m}\operatorname {prox}_{\lambda h_j})\), which implies that \(A\bar{x} \in F(S) \cap \big(\bigcap _{j=1}^{m} F(\operatorname {prox}_{\lambda h_j})\big)\), that is, \(A\bar{x} \in \operatorname {Fix}(S)\) and \(A\bar{x} \in \bigcap _{j=1}^{m}\operatorname {argmin}_{y \in H_2}h_j(y)\). It remains to show that \(\bar{x} \in \Upsilon \).

From step 3 of algorithm 11 and 46, we get that

\begin{align} \label{AN} \lim _{j \rightarrow \infty }\| \nabla f(w_{n_{k_j}})-\nabla f(y_{n_{k_j}})\| =0. \end{align}

Since \(w_{n_{k_j}}=\operatorname {prox}_{\lambda _{n_{k_j}} g}(y_{n_{k_j}}-\lambda _{n_{k_j}}\nabla f(y_{n_{k_j}}))\), it follows from 13 that

\begin{align} \label{AM} \tfrac {y_{n_{k_j}}-\lambda _{n_{k_j}}\nabla f(y_{n_{k_j}})-w_{n_{k_j}}}{\lambda _{n_{k_j}}} \in \partial g(w_{n_{k_j}}), \end{align}

which implies that

\begin{align} \label{AP} \tfrac {y_{n_{k_j}}-w_{n_{k_j}}}{\lambda _{n_{k_j}}} + \nabla f(w_{n_{k_j}})-\nabla f (y_{n_{k_j}}) \in \nabla f(w_{n_{k_j}}) + \partial g (w_{n_{k_j}}) \subseteq \partial (f+g) (w_{n_{k_j}}). \end{align}

Passing \(j \rightarrow \infty \) and applying lemma 6 and 52, we obtain that \(0 \in \partial (f+g)(\bar{x})\), that is, \(\bar{x} \in \Upsilon \). Hence, we conclude that \(\bar{x} \in \Gamma .\) Next, we show that

\[ \limsup \limits _{k \rightarrow \infty } \langle u-x^*, x_{n_{k}+1}-x^*\rangle \leq 0. \]

Since \(x^*=P_{\Gamma }u\) and \(\bar{x} \in \Gamma ,\) the characterization of the metric projection gives \(\langle u-x^*, \bar{x}-x^*\rangle \leq 0.\) Combining this with 51 and the weak convergence \(x_{n_{k_j}} \rightharpoonup \bar{x},\) we have

\begin{align*} \limsup _{k \rightarrow \infty } \langle u-x^*, x_{n_{k}+1}-x^*\rangle & =\lim _{j \rightarrow \infty } \langle u-x^*, x_{n_{k_j}}-x^*\rangle \\ & = \langle u-x^*, \bar{x}-x^*\rangle . \end{align*}

Hence, we obtain that

\begin{align} \label{3.50} \limsup _{k \rightarrow \infty }\langle u-x^*, x_{n_{k}+1}-x^*\rangle \leq 0. \end{align}

On substituting 55 into 39 and applying lemma 10, we conclude that \(\{ x_n\} \) converges strongly to \(x^*=P_{\Gamma }u\).


4 Numerical example

In this section, we give a numerical example in an \(m\)-dimensional space of real numbers to support our main result.

Example 16

Let \(H_1=H_2=\mathbb {R}^m\) with the Euclidean norm. Define \(f,g: H_1 \to \mathbb {R} \cup \{ +\infty \} \) by

\[ f(x)=\tfrac {1}{2}\| Dx-b\| ^2, \qquad g(y)=\tfrac {1}{2}\| Ey-c\| ^2, \]

where \(D, E \in \mathbb {R}^{m \times m}\) and \(b, c \in \mathbb {R}^m\) (we write \(D\) and \(E\) here to avoid a clash with the operator \(A\) defined below). It is easy to see that \(f\) and \(g\) are proper, lower semicontinuous and convex. Also, we know by [ 21 ] that

\begin{align*} \operatorname {prox}_{\lambda f}(x)& =\operatorname {argmin}_{y \in H_1}\left[f(y)+\tfrac {1}{2\lambda }\| y-x\| ^2 \right]\nonumber \\ & =(I+\lambda D^T D)^{-1}(x+\lambda D^T b). \end{align*}

Now, let \(A: H_1 \to H_2\) be defined by

\[ A(x)=\left(\tfrac {x}{1.5} \right), ~ ~ \forall ~ x=\{ x_i\} _{i=1}^{m} \]

then

\[ A^{T}(y)=\left(\tfrac {y}{1.5} \right),~ ~ \forall ~ y=\{ y_i\} _{i=1}^{m}. \]

For each \(j=1,2,\) \(x \in H_2,\) let \(h_j : H_2 \to \mathbb {R} \cup \{ +\infty \} \) be given by

\begin{equation*} h_j(x)=\tfrac {1}{2}\| P_j x-q_j\| ^2, \end{equation*}

where \(P_1, P_2 \in \mathbb {R}^{m \times m}\) and \(q_1, q_2 \in \mathbb {R}^m.\) As before,

\begin{equation*} \operatorname {prox}_{\lambda h_j}(x)=(I+\lambda P_{j}^{T}P_j)^{-1}(x+\lambda P_{j}^{T}q_j). \end{equation*}

Let the mapping \(S: H_2 \to H_2\) be defined by \(S(x)=\frac{x}{2}.\) For this example, choose \(\beta _n=\frac{1}{2n+3},\) \(u=\frac{1}{2},\) \(\delta =\frac{1}{4},\) \(\mu =\frac{1}{8},\) \(\sigma =\frac{3}{2000}\) and \(\epsilon _n =\frac{1}{n^{1.2}}.\) We choose the initial points \(x_0, x_1 \in \mathbb {R}^m\) randomly in (0,1). By using \(\| x_{n+1}-x_n\| ^2 \leq 10^{-4}\) as our stopping criterion, we conduct this example for various values of \(m.\)

Case I: \(m=10;\)
Case II: \(m=15;\)
Case III: \(m=20;\)
Case IV: \(m=50.\)
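For reference, this setup can be wired to the hypothetical \texttt{algorithm\_11} sketch from section 3 as follows (random data, Case I; the stopping criterion is replaced by a fixed iteration budget for brevity):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m = 10                                  # Case I; m = 15, 20, 50 likewise
D, E = rng.standard_normal((m, m)), rng.standard_normal((m, m))
b, c = rng.standard_normal(m), rng.standard_normal(m)
Ps = [rng.standard_normal((m, m)) for _ in range(2)]
qs = [rng.standard_normal(m) for _ in range(2)]

grad_f = lambda x: D.T @ (D @ x - b)    # f(x) = 0.5*||Dx - b||^2
prox_g = lambda z, lam: np.linalg.solve(
    np.eye(m) + lam * E.T @ E, z + lam * E.T @ c)
prox_h_list = [
    lambda z, lam, P=P, q=q: np.linalg.solve(
        np.eye(m) + lam * P.T @ P, z + lam * P.T @ q)
    for P, q in zip(Ps, qs)]
A = np.eye(m) / 1.5                     # A(x) = x/1.5
S = lambda z: z / 2                     # S(x) = x/2

x = algorithm_11(rng.random(m), rng.random(m), u=np.full(m, 0.5),
                 A=A, S=S, prox_g=prox_g, prox_h_list=prox_h_list,
                 grad_f=grad_f, beta=lambda n: 1 / (2 * n + 3),
                 eps=lambda n: n ** -1.2, sigma=3 / 2000,
                 mu=1 / 8, delta=1 / 4)
\end{verbatim}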

The results of this experiment are reported in figure 1.

\includegraphics[width=6.0cm]{m10.jpg} \includegraphics[width=6.0cm]{m15.jpg} \includegraphics[width=6.0cm]{m20.jpg} \includegraphics[width=6.0cm]{m50.jpg}
Figure 1 Top left: Case I, Top right: Case II,
Bottom left: Case III, Bottom right: Case IV.

References

H.A. Abass, K.O. Aremu, L.O. Jolaoso and O.T. Mewomo, An inertial forward-backward splitting method for approximating solutions of certain optimization problem, J. Nonlinear Funct. Anal., 2020 (2020), Article ID 6, https://doi.org/10.23952/jnfa.2020.6.

H.A. Abass, C. Izuchukwu, O.T. Mewomo and Q.L. Dong, Strong convergence of an inertial forward-backward splitting method for accretive operators in real Banach spaces, Fixed Point Theory, 21 (2020) no. 2, pp. 397–412, https://doi.org/10.24193/fpt-ro.2020.2.28.

H.A. Abass, C. Izuchukwu, O.T. Mewomo and F.U. Ogbuisi, An iterative method for solutions of finite families of split minimization problems and fixed point problems, Novi Sad Journal of Mathematics, 49 (2019) no. 1, pp. 117–136, https://doi.org/10.30755/NSJOM.07925.

H.A. Abass and L.O. Jolaoso, An inertial generalized viscosity approximation method for solving multiple-sets split feasibility problem and common fixed point of strictly pseudo-nonspreading mappings, Axioms, 10 (2021) no. 1, art. no. 1, https://doi.org/10.3390/axioms10010001.

M. Abbas, M. Alshahrani, Q.H. Ansari, O.S. Iyiola and Y. Shehu, Iterative methods for solving proximal split minimization problems, Numer. Algor., 78 (2018) no. 1, pp. 193–215, https://doi.org/10.1007/s11075-017-0372-3.

F. Alvarez and H. Attouch, An Inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set-Valued Anal., 9 (2001), pp. 3–11, https://doi.org/10.1023/A:1011253113155.

Q.H. Ansari and A. Rehan, Split feasibility and fixed point problems. In Nonlinear Analysis: Approximation Theory, Optimization and Application, ed. Q.H. Ansari, pp. 281–322. New York, Springer, 2014, https://doi.org/10.1007/978-81-322-1883-8_9

K. Aoyama, Y. Kimura, W. Takahashi and M. Toyoda, Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space, Nonlinear Anal., 67 (2007), pp. 2350–2360, https://doi.org/10.1016/j.na.2006.08.032.

H.H. Bauschke and P.L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, 408, New York, Springer, 2011, https://doi.org/10.1007/978-3-319-48311-5.

J.Y. Bello Cruz and T.T.A. Nghia, On the convergence of the forward-backward splitting method with linesearches, Optim. Methods Softw., 31 (2016) no. 6, pp. 1209–1238, https://doi.org/10.1080/10556788.2016.1214959.

C. Byrne, Iterative oblique projection onto convex sets and the split feasibility problem, Inverse Problems, 18 (2002) no. 2, pp. 441–453, https://iopscience.iop.org/article/10.1088/0266-5611/18/2/310/meta.

C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Problems, 20 (2004), pp. 103–120, https://iopscience.iop.org/article/10.1088/0266-5611/20/1/006/meta.

C. Byrne, Y. Censor, A. Gibali and S. Reich, The split common null point problem, J. Nonlinear Convex Anal., 13 (2012), pp. 759–775, https://doi.org/10.48550/arXiv.1108.5953.

R.S. Burachik and A.N. Iusem, Set-valued mappings and enlargements of monotone operators, 8, Boston, Springer Science Business Media, 2007.

Y. Censor, T. Bortfield, B. Martin and A. Trofimov, A unified approach for inversion problems in intensity-modulated radiation therapy, Phys. Med. Biol., 51 (2006), pp. 2353–2365, https://doi.org/10.1088/0031-9155/51/10/001.

Y. Censor and T. Elfving, A multiprojection algorithm using Bregman projections in product space, Numer. Algor., 8 (1994), pp. 221–239, https://doi.org/10.1007/BF02142692

Y. Censor, T. Elfving, N. Kopf and T. Bortfeld, The multiple-sets split feasibility problem and its applications for inverse problems, Inverse Problems, 21 (2005), pp. 2071–2084, https://doi.org/10.1088/0266-5611/21/6/017.

S.S. Chang, J.C. Yao, L. Wang, M. Liu and L. Zhao, On the inertial forward-backward splitting technique for solving a system of inclusion problems in Hilbert spaces, Optimization, 70 (2021), pp. 1–15, https://doi.org/10.1080/02331934.2020.1786567.

C.E. Chidume, Geometric properties of Banach spaces and nonlinear iterations, Lecture Notes in Mathematics, Springer-Verlag, ISBN 978-1-84882-189-7, 2009, https://doi.org/10.1007/978-1-84882-190-3.

W. Cholamjiak, P. Cholamjiak and S. Suantai, An inertial forward-backward splitting method for solving inclusion problems in Hilbert space, J. Fixed Point Theor. Appl., (2018), pp. 20–42, https://doi.org/10.1007/s11784-018-0526-5.

P.L. Combettes and J.C. Pesquet, Proximal splitting methods in signal processing, in: H.H. Bauschke, R.S. Burachik, P.L. Combettes, V. Elser, D.R. Luke, H. Wolkowicz (eds.), Fixed-Point Algorithms for Inverse Problems in Science and Engineering, Springer Optimization and Its Applications, 49, pp. 185–212, Springer, New York, 2011, https://doi.org/10.1007/978-1-4419-9569-8_10.

K. Goebel and S. Reich, Uniform convexity, Hyperbolic Geometry and Nonexpansive Mappings, Marcel Dekker, New York, 1984, https://doi.org/10.1112/blms/17.3.293

F.O. Isiogugu and M.O. Osilike, Convergence theorems for new classes of multivalued hemicontractive-type mappings, Fixed Point Theory and Applications, 93 (2014), https://doi.org/10.1186/1687-1812-2014-93.

L.O. Jolaoso, H.A. Abass and O.T. Mewomo, A Viscosity-Proximal Gradient Method with Inertial Extrapolation for Solving Minimization Problem in Hilbert Space, Arch. Math. (Brno), 55 (2019), pp. 167–194, https://dml.cz/handle/10338.dmlcz/147824.

C. Kanzow and Y. Shehu, Strong convergence of a double-type method for monotone variational inequalities in Hilbert spaces, J. Fixed Point Theory Appl., 20 (2018) no. 1, art. 51, pp. 1–24, https://doi.org/10.1007/s11784-018-0531-8.

Y. Kimura and S. Saejung, Strong convergence for a common fixed points of two different generalizations of cutter operators, Linear Nonlinear Anal., 1 (2015), pp. 53–65.

K. Kunrada and P. Cholamjiak, Strong convergence of the forward-backward splitting algorithms via linesearches in Hilbert spaces, Applicable Analysis, (2021), pp.  1–20, https://doi.org/10.1080/00036811.2021.1986021.

L.J. Lin and W. Takahashi, A general iterative method for hierarchical variational inequality problems in Hilbert spaces and applications, Positivity, 16 (2012) no. 3, pp. 429–453, https://doi.org/10.1007/s11117-012-0161-0.

D.A. Lorenz and T. Pock, An inertial forward-backward splitting algorithm for monotone inclusions, J. Math. Imaging Vis., 51 (2015), pp. 311–325, https://doi.org/10.1007/s10851-014-0523-2.

A. Moudafi, Split monotone variational inclusions, J. Optim. Theory Appl., 150 (2011), pp. 275–288, https://doi.org/10.1007/s10957-011-9814-6.

P.E. Mainge, Inertial iterative process for fixed points of certain quasi-nonexpansive mappings, Set-Valued Anal., 15 (2007), pp. 67–79, https://doi.org/10.1007/s11228-006-0027-3.

W. Phuengrattana and J. Tiammee, Proximal point algorithms for finding common fixed points of a finite family of quasi-nonexpansive multi-valued mappings in real Hilbert spaces, J. Fixed Point Theory Appl., 20 (2018), pp. 1–14, https://doi.org/10.1007/s11784-018-0590-x.

B.T. Polyak, Some methods of speeding up the convergence of iteration methods, U.S.S.R. Comput. Math. Math. Phys., 4 (1964) no. 5, pp. 1–17, https://doi.org/10.1016/0041-5553(64)90137-5.

Y. Shehu and O.S. Iyiola, On a modified extragradient method for variational inequality problem with application to industrial electricity production, J. Ind. Manag. Optim., 15 (2019) no. 1, pp. 319–342, https://doi.org/10.3934/jimo.2018045.

Y. Shehu and F.U. Ogbuisi, An iterative method for solving split monotone variational inclusion and fixed point problems, Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM, 110 (2016) no. 2, pp. 503–518, https://doi.org/10.1007/s13398-015-0245-3.

W. Takahashi, Nonlinear Functional Analysis, Yokohama Publishers, Yokohama, 2000.

W. Takahashi, Introduction to nonlinear and convex analysis, Yokohama Publisher, Yokohama, 2009.

D.V. Thong and D.V. Hieu, A new approximation method for finding common fixed points of families of demicontractive operators and applications, J. Fixed Point Theory Appl., 20 (2018), pp. 1–27, https://doi.org/10.1007/s11784-018-0551-4.

Y. Wang and F. Wang, Strong convergence of the forward-backward splitting method with multiple parameters in Hilbert spaces, Optimization, 67 (2018), no. 4, pp. 493–505, https://doi.org/10.1080/02331934.2017.1411485.

H.K. Xu, Averaged mappings and the gradient-projection algorithm, J. Optim. Theory Appl., 150 (2011), pp. 360–378, https://doi.org/10.1007/s10957-011-9837-z.

H.Y. Zhou, Convergence theorems of fixed points for strict pseudo-contractions in Hilbert spaces, Nonlinear Anal., 69 (2008) no. 2, pp. 456–462, https://doi.org/10.1016/j.na.2007.05.032.