Abstract
A lower and upper solution method is introduced for control problems related to abstract operator equations. The method is illustrated on a control problem for the Lotka–Volterra model with seasonal harvesting and applied to a control problem of cell evolution after bone marrow transplantation.
Authors
Lorand Gabriel Parajdi
Department of Mathematics, West Virginia University, P.O. Box 6201, Morgantown, WV 26506, USA; email: lorand.parajdi@mail.wvu.edu
Department of Mathematics, Babeş–Bolyai University, M. Kogălniceanu Street, Cluj-Napoca, Romania
Radu Precup
Department of Mathematics, Babeş–Bolyai University, Cluj-Napoca, Romania
Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy
Ioan-Ştefan Haplea
Department of Internal Medicine, Iuliu Haţieganu University of Medicine and Pharmacy, Victor Babeş Street, Cluj-Napoca, Romania
Keywords
control problem, lower and upper solutions, fixed point, approximation algorithm, numerical solution, medical application.
Paper coordinates
L.G. Parajdi, R. Precup, I.Ş. Haplea, A method of lower and upper solutions for control problems and application to a model of bone marrow transplantation, Int. J. Appl. Math. Comput. Sci., 33 (2023) no. 3, 409–418, http://doi.org/10.34768/amcs-2023-0029
About this paper
Journal
Int. J. Appl. Math. Comput. Sci.
Publisher Name
Sciendo (Walter de Gruyter)
Print ISSN
1641-876X
Paper (preprint) in HTML form
Method of lower and upper solutions for control problems and application to a model of bone marrow transplantation
Abstract
In this paper, the lower and upper solution method is introduced for control problems related to abstract operator equations. The method is illustrated on a control problem for the Lotka–Volterra model with seasonal harvesting and applied to a control problem of cell evolution after bone marrow transplantation.
Keywords: control problem, lower and upper solutions, fixed point, approximation algorithm, numerical solution, medical application
Subject Classification: 34H05, 34K35, 41A65, 65J15, 92C50
1 Introduction
In [3] we introduced a controllability principle for a general control problem related to operator equations. It consists of finding a pair $(w,\lambda )$ that solves the following system
$$\{\begin{array}{c}w=N(w,\lambda )\\ w\in W,\lambda \in \mathrm{\Lambda},(w,\lambda )\in D\end{array}$$  (1.1) 
involving the fixed point equation $w=N(w,\lambda ).$ Here $w$ is the state variable, $\lambda $ is the control variable, $W$ is the domain of the states, $\mathrm{\Lambda}$ is the domain of controls, and $D\subset W\times \mathrm{\Lambda}$ is the controllability domain, which expresses a certain condition/property imposed on $w,$ or on both $w$ and $\lambda .$ No structure is imposed on the sets $W,\mathrm{\Lambda}$ and $D,$ and no properties are required of the mapping $N$ from $W\times \mathrm{\Lambda}$ to $W.$
We say that the equation $w=N(w,\lambda )$ is controllable in $W\times \mathrm{\Lambda}$ with respect to $D$ if problem (1.1) has a solution $(w,\lambda )$. If the solution is unique, we say that the equation is uniquely controllable.
In order to describe the solving of the general control problem, let us define the sets
$$\mathrm{\Sigma}:=\{(w,\lambda )\in W\times \mathrm{\Lambda}:w=N(w,\lambda )\},$$
$${\mathrm{\Sigma}}_{1}:=\{w\in W:\text{there is } \lambda \in \mathrm{\Lambda}\text{ with } (w,\lambda )\in \mathrm{\Sigma}\}.$$
Obviously, the solutions of the control problem (1.1) are the elements of $\mathrm{\Sigma}\cap D.$
Consider now the set-valued map $F:{\mathrm{\Sigma}}_{1}\to \mathrm{\Lambda}$ given by
$$F\left(w\right):=\{\lambda \in \mathrm{\Lambda}:\text{}(w,\lambda )\in \mathrm{\Sigma}\cap D\}.$$ 
Notice that $F$ gives the expression of the control variable in terms of the state variable.
The next proposition is a general principle for solving the control problem (1.1).
Proposition 1
If for some extension $\stackrel{~}{F}:W\to \mathrm{\Lambda}$ of $F$ from ${\mathrm{\Sigma}}_{1}$ to $W,$ there exists a fixed point $w\in W$ of the set-valued map
$$\stackrel{~}{N}\left(w\right):=N(w,\stackrel{~}{F}\left(w\right)),$$ 
i.e.,
$$w=N(w,\lambda ),$$  (1.2) 
for some $\lambda \in \stackrel{~}{F}\left(w\right),$ then the couple $(w,\lambda )$ is a solution of the control problem (1.1).
Proof. From (1.2), one has $(w,\lambda )\in \mathrm{\Sigma}.$ Hence $w\in {\mathrm{\Sigma}}_{1}$ and so $\stackrel{~}{F}\left(w\right)=F\left(w\right).$ Then $\lambda \in F\left(w\right)$ and from the definition of $F,$ it follows that $(w,\lambda )\in D.$ Thus $(w,\lambda )$ is a solution of (1.1).
In many applications $F$ and $\stackrel{~}{F}$ are single-valued maps and $F$ can be extended to $W$ simply by means of its defining expression.
Two applications to a system modeling cell dynamics related to leukemia have been included in [3] (see also [5]) and three others in [6], two of which are self-control problems. Many other illustrations of the above principle can be derived from the rich literature on control theory (see, e.g., the monograph [2]).
The aim of this note is to give an algorithm for approximating the solutions of general control problems. It basically consists of a bisection method for the control variable inside an interval associated with a pair of lower and upper solutions. The notions of lower and upper solutions for a general control problem are defined accordingly.
2 Lower and upper method for control problems
First, we introduce, in the present set framework, the notions of lower and upper solutions of a control problem. To this aim, we assume a certain partition of the solution domain
$$W\times \mathrm{\Lambda}=\underline{D}\cup \overline{D},\qquad \underline{D}\cap \overline{D}=D,$$
which allows us to say that the condition of controllability is targeted from the left or from the right. Thus, we can give the following definition.
Definition 2
By a lower (upper) solution of problem (1.1) we mean a pair $(w,\lambda )\in \mathrm{\Sigma}\cap \underline{D}$ (respectively, $(w,\lambda )\in \mathrm{\Sigma}\cap \overline{D}$).
In contrast to the controllability principle stated above in unstructured sets, from now on we shall assume a certain topological structure on the sets and, accordingly, the continuity of the map $N.$ The topological framework is the natural one for approximation methods, where a precise meaning is required for the notions of approximant and of convergence of the method.
Assume that $W$ is a metric space, $\mathrm{\Lambda}=\operatorname{conv}\{{\underline{\lambda}}_{0},{\overline{\lambda}}_{0}\}$ is a segment of a normed space with norm $\parallel \cdot \parallel ,$ and that the sets $\underline{D},\overline{D}$ are closed.
In addition, assume that the following conditions are satisfied:
(h1) The problem admits a lower solution of the form $({\underline{w}}_{0},{\underline{\lambda}}_{0})$ and an upper solution $({\overline{w}}_{0},{\overline{\lambda}}_{0});$
(h2) for each $\sigma \in [0,1],$ there exists a unique $w=:S\left(\sigma \right)\in W$ with $(w,\lambda )\in \mathrm{\Sigma}$ for $\lambda =\lambda \left(\sigma \right):=\left(1-\sigma \right){\underline{\lambda}}_{0}+\sigma {\overline{\lambda}}_{0};$
(h3) the map $S:[0,1]\to W$ is continuous.
We now use the following bisection algorithm.
Algorithm 3
Let ${\underline{\sigma}}_{0}=0$ and ${\overline{\sigma}}_{0}=1.$ For $k\ge 1$ execute
Step 1: Calculate
$$\sigma :=\frac{{\underline{\sigma}}_{k-1}+{\overline{\sigma}}_{k-1}}{2}$$
and find $S\left(\sigma \right).$
Step 2:
If $(S\left(\sigma \right),\lambda \left(\sigma \right))\in D,$ we are finished and $(S\left(\sigma \right),\lambda \left(\sigma \right))$ is a solution of the control problem; otherwise, take
(a) ${\underline{\sigma}}_{k}:=\sigma ,\ {\overline{\sigma}}_{k}:={\overline{\sigma}}_{k-1},$ if $(S\left(\sigma \right),\lambda \left(\sigma \right))\in \underline{D}\setminus D;$
(b) ${\underline{\sigma}}_{k}:={\underline{\sigma}}_{k-1},\ {\overline{\sigma}}_{k}:=\sigma ,$ if $(S\left(\sigma \right),\lambda \left(\sigma \right))\in \overline{D}\setminus D,$
then set $k:=k+1$ and go to Step 1.
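Algorithm 3 can be sketched in code. The following is a minimal Python sketch under hypotheses (h1)–(h3); the names `solve` (playing the role of the solution operator $S$) and `side` (an assumed signed indicator, negative on the lower set, positive on the upper set, near zero on $D$) are illustrative, not from the paper.

```python
def bisect_control(solve, side, lam_lo, lam_hi, tol=1e-8, max_iter=60):
    """Bisection between a lower solution (sigma = 0) and an upper one (sigma = 1).

    solve(lam) returns the unique state w with w = N(w, lam)  (hypothesis (h2));
    side(w) is negative on the lower set, positive on the upper set,
    and ~0 on the controllability set D.
    """
    def lam(sigma):
        # convex combination lambda(sigma) = (1 - sigma)*lam_lo + sigma*lam_hi
        return tuple((1 - sigma) * a + sigma * b for a, b in zip(lam_lo, lam_hi))

    s_lo, s_hi = 0.0, 1.0
    for _ in range(max_iter):
        s = 0.5 * (s_lo + s_hi)      # Step 1: midpoint of the current interval
        w = solve(lam(s))
        value = side(w)
        if abs(value) <= tol:        # Step 2: (S(sigma), lambda(sigma)) in D
            return s, w
        if value < 0:                # case (a): landed in the lower set
            s_lo = s
        else:                        # case (b): landed in the upper set
            s_hi = s
    s = 0.5 * (s_lo + s_hi)
    return s, solve(lam(s))
```

For instance, with the toy operator `solve(lam) = lam[0]**2` and `side(w) = w - 1`, the routine returns $\sigma =1/2$, the value for which $\lambda \left(\sigma \right)=1.$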
The algorithm leads either to a solution when it stops, or to two sequences $\left({\underset{\xaf}{\sigma}}_{k}\right)$ and $\left({\overline{\sigma}}_{k}\right)$ such that
(i) $0\le {\underline{\sigma}}_{1}\le \mathrm{\cdots}\le {\underline{\sigma}}_{k}\le {\underline{\sigma}}_{k+1}\le \mathrm{\cdots}\le 1$ and $(S\left({\underline{\sigma}}_{k}\right),\lambda \left({\underline{\sigma}}_{k}\right))\in \underline{D},$
(ii) $0\le \mathrm{\cdots}\le {\overline{\sigma}}_{k+1}\le {\overline{\sigma}}_{k}\le \mathrm{\cdots}\le {\overline{\sigma}}_{1}\le 1$ and $(S\left({\overline{\sigma}}_{k}\right),\lambda \left({\overline{\sigma}}_{k}\right))\in \overline{D},$
(iii) $0\le {\overline{\sigma}}_{k}-{\underline{\sigma}}_{k}=\frac{1}{{2}^{k}}.$
Then the sequences $\left({\underline{\sigma}}_{k}\right)$ and $\left({\overline{\sigma}}_{k}\right)$ are convergent and, by virtue of (iii), their limits are equal, say ${\sigma}^{\ast}.$ Clearly, ${\sigma}^{\ast}\in [0,1].$ Furthermore, using the continuity of $S$ and the fact that $\underline{D},\overline{D}$ are closed, from $(S\left({\underline{\sigma}}_{k}\right),\lambda \left({\underline{\sigma}}_{k}\right))\in \underline{D}$ we obtain $(S\left({\sigma}^{\ast}\right),\lambda \left({\sigma}^{\ast}\right))\in \underline{D}.$ Similarly, $(S\left({\sigma}^{\ast}\right),\lambda \left({\sigma}^{\ast}\right))\in \overline{D}.$ Hence $(S\left({\sigma}^{\ast}\right),\lambda \left({\sigma}^{\ast}\right))\in D,$ that is, $(S\left({\sigma}^{\ast}\right),\lambda \left({\sigma}^{\ast}\right))$ solves the control problem, which proves the convergence of the algorithm. Thus, we have proved the following theorem.
Theorem 4
Assume that conditions (h1)–(h3) hold. Then either Algorithm 3 stops at a solution of the control problem (1.1), or it generates two sequences $\left({\underline{\sigma}}_{k}\right)$ and $\left({\overline{\sigma}}_{k}\right)$ converging to a common limit ${\sigma}^{\ast}\in [0,1]$ for which $(S\left({\sigma}^{\ast}\right),\lambda \left({\sigma}^{\ast}\right))$ is a solution of problem (1.1).
Remark 5
Theorem 4 guarantees that the algorithm finishes after a finite number of iterations under a stopping criterion of the form
$$\left|{\overline{\sigma}}_{k}-{\underline{\sigma}}_{k}\right|\le \epsilon ,$$
for any given $\epsilon $ with $0<\epsilon <1.$ Then any one of the pairs $(S\left({\underline{\sigma}}_{k}\right),\lambda \left({\underline{\sigma}}_{k}\right))$ and $(S\left({\overline{\sigma}}_{k}\right),\lambda \left({\overline{\sigma}}_{k}\right)),$ where $k$ corresponds to the last iteration, can be considered an approximation of a solution to the control problem. The error for the control parameter $\lambda $ is less than or equal to $\left(\Vert {\underline{\lambda}}_{0}\Vert +\Vert {\overline{\lambda}}_{0}\Vert \right)\epsilon .$
Remark 6
The above result does not require any ordering between ${\underline{\lambda}}_{0}$ and ${\overline{\lambda}}_{0}.$
Remark 7
For concrete applications, the computer implementation of the above algorithm needs, in addition, an approximation procedure for the solution operator $S.$ The algorithm is then in fact applied to an approximation ${S}_{\mathrm{approx}}$ of $S.$ Such approximations can be obtained by the method of successive approximations (guaranteed by the Banach and Perov fixed point theorems), Newton's method, techniques of upper and lower solutions, variational principles, finite element methods, etc.
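For the first option in this list, the method of successive approximations, a minimal scalar sketch follows; the operator name `N_op` and the toy contraction are illustrative, not from the paper.

```python
from math import cos

def picard(N_op, w0, tol=1e-12, max_iter=10_000):
    """Successive approximations w_{n+1} = N(w_n); converges when N is a
    contraction (Banach fixed point theorem)."""
    w = w0
    for _ in range(max_iter):
        w_next = N_op(w)
        if abs(w_next - w) <= tol:
            return w_next
        w = w_next
    raise RuntimeError("successive approximations did not converge")

# Toy contraction: N(w) = cos(w) on [0, 1]; its fixed point is the Dottie number.
w_star = picard(cos, 0.5)
```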
An example. Consider the Lotka–Volterra model with seasonal harvesting
$$\left\{\begin{array}{l}{x}^{\prime}=ax\left(1-by\right)-g\left(t\right)\\ {y}^{\prime}=-cy\left(1-dx\right)-h\left(t\right),\end{array}\right.$$  (2.1) 
where $g,h\in {L}^{1}(0,T).$
Assume that for two routine harvesting policies $({g}_{1},{h}_{1})$ and $({g}_{2},{h}_{2}),$ the ratio $x\left(T\right)/y\left(T\right)$ between the two species at the end $T$ of a season is below and, respectively, above a desired level $r.$ The control problem is to find an appropriate harvesting policy $(g,h)$ that achieves the ratio $r$ considered optimal. Clearly, under some given initial values ${x}_{0},{y}_{0},$ the system (2.1) reads equivalently as the fixed point equation
$$\left\{\begin{array}{l}x\left(t\right)={x}_{0}+{\int}_{0}^{t}\left(ax\left(s\right)\left(1-by\left(s\right)\right)-g\left(s\right)\right)\mathit{d}s\\ y\left(t\right)={y}_{0}+{\int}_{0}^{t}\left(-cy\left(s\right)\left(1-dx\left(s\right)\right)-h\left(s\right)\right)\mathit{d}s.\end{array}\right.$$  (2.2) 
Compared to the abstract control problem stated in Section 1, here $w=(x,y),$ $\lambda =(g,h),$ $W=C([0,T];{(0,+\mathrm{\infty})}^{2}),$ ${\underline{\lambda}}_{0}=({g}_{1},{h}_{1}),$ ${\overline{\lambda}}_{0}=({g}_{2},{h}_{2}),$ $\mathrm{\Lambda}=\{({g}_{\sigma},{h}_{\sigma}):\sigma \in [0,1]\},$ where ${g}_{\sigma}=\left(1-\sigma \right){g}_{1}+\sigma {g}_{2},$ ${h}_{\sigma}=\left(1-\sigma \right){h}_{1}+\sigma {h}_{2},$
$$D=\{(x,y,g,h):x\left(T\right)/y\left(T\right)=r\},$$
$$\underline{D}=\{(x,y,g,h):x\left(T\right)/y\left(T\right)\le r\},\qquad \overline{D}=\{(x,y,g,h):x\left(T\right)/y\left(T\right)\ge r\}.$$
Also, according to Definition 2, $({x}_{1},{y}_{1},{g}_{1},{h}_{1})$ and $({x}_{2},{y}_{2},{g}_{2},{h}_{2})$ (with $({x}_{i},{y}_{i})$ the solution of system (2.2) for $g={g}_{i}$ and $h={h}_{i},i=1,2$) are lower and upper solutions of the control problem, respectively.
Notice that it seems natural for the appropriate policy to be intermediate between the two routine policies. Our algorithm looks for it as a convex combination of the two routine strategies, and thus the control problem reduces to finding $\sigma \in [0,1]$ for which the solution $(x,y)$ of the system
$$\left\{\begin{array}{l}{x}^{\prime}=ax\left(1-by\right)-{g}_{\sigma}\left(t\right)\\ {y}^{\prime}=-cy\left(1-dx\right)-{h}_{\sigma}\left(t\right),\end{array}\right.$$
with ${g}_{\sigma}=\left(1-\sigma \right){g}_{1}+\sigma {g}_{2},$ ${h}_{\sigma}=\left(1-\sigma \right){h}_{1}+\sigma {h}_{2}$ and some given initial values ${x}_{0},{y}_{0},$ satisfies
$$\frac{x\left(T\right)}{y\left(T\right)}=r.$$ 
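A minimal numerical sketch of this example follows. All parameter values, the constant routine policies and the step count are illustrative choices, not taken from the paper; the target ratio $r$ is set between the two routine outcomes so that a lower/upper solution pair in the sense of Definition 2 exists, and classical Runge–Kutta time stepping stands in for the exact solution operator $S.$

```python
def rk4(f, w0, T, n=2000):
    """Classical 4th-order Runge-Kutta for w' = f(t, w) on [0, T]."""
    t, w, dt = 0.0, list(w0), T / n
    for _ in range(n):
        k1 = f(t, w)
        k2 = f(t + dt / 2, [wi + dt / 2 * ki for wi, ki in zip(w, k1)])
        k3 = f(t + dt / 2, [wi + dt / 2 * ki for wi, ki in zip(w, k2)])
        k4 = f(t + dt, [wi + dt * ki for wi, ki in zip(w, k3)])
        w = [wi + dt / 6 * (p + 2 * q + 2 * s + u)
             for wi, p, q, s, u in zip(w, k1, k2, k3, k4)]
        t += dt
    return w

# Illustrative data: two constant routine harvesting policies (g_i, h_i).
a, b, c, d = 1.0, 0.5, 0.8, 0.4
g1, h1 = 0.30, 0.02          # policy 1: heavy prey harvesting
g2, h2 = 0.05, 0.25          # policy 2: heavy predator harvesting
x0, y0, T = 2.0, 2.0, 1.0

def ratio(sigma):
    """x(T)/y(T) under the convex-combination policy (g_sigma, h_sigma)."""
    g = (1 - sigma) * g1 + sigma * g2
    h = (1 - sigma) * h1 + sigma * h2
    f = lambda t, w: [a * w[0] * (1 - b * w[1]) - g,
                      -c * w[1] * (1 - d * w[0]) - h]
    x, y = rk4(f, [x0, y0], T)
    return x / y

# Target ratio r strictly between the two routine outcomes, so that
# sigma = 0 and sigma = 1 give a lower/upper solution pair.
r = 0.5 * (ratio(0.0) + ratio(1.0))

lo, hi = (0.0, 1.0) if ratio(0.0) < r else (1.0, 0.0)   # "lo" carries ratio < r
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if ratio(mid) < r:
        lo = mid
    else:
        hi = mid
sigma_star = 0.5 * (lo + hi)
```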
3 Control of cell evolution after bone marrow transplantation
In the paper [10] (see also [7]), the following system was introduced as a model of cell dynamics after bone marrow transplantation:
$$\left\{\begin{array}{l}{x}^{\prime}=\frac{a}{1+b(x+y+z)}\frac{x+y}{x+y+gz}x-cx\\[2mm] {y}^{\prime}=\frac{A}{1+B(x+y+z)}\frac{x+y}{x+y+Gz}y-Cy\\[2mm] {z}^{\prime}=\frac{a}{1+b(x+y+z)}\frac{z}{z+h\left(x+y\right)}z-cz.\end{array}\right.$$  (3.1) 
Here $x\left(t\right),y\left(t\right),z\left(t\right)$ stand for the normal, leukemic and donor cell populations at time $t,$ after the transplant time $t=0$ at which their concentrations are assumed to have been ${x}_{0},{y}_{0},{z}_{0}\;\left(>0\right).$ Parameters $a,A$ stand for the growth rates of normal and leukemic cells; $c,C$ are their cell death rates; and $b,B$ are their microenvironment sensitivity rates. Also, the parameters $h,g,G$ stand for the intensity of the anti-graft, anti-host and anti-leukemia effects, respectively. They aggregate a large number of cell biophysical properties, as well as exterior stimulation and inhibition during immunotherapy. Values of the parameters $h,g,G$ close to zero correspond to weak interactions, larger values quantify strong effects, and their pre-transplant estimation would be crucial for the transplant strategy (conditioning treatment, dose of infused cells and post-transplant immunosuppressive therapy). We note that the model was refined in [4] by considering, instead of the same sensitivity rate $b$ for normal and leukemic cells, different sensitivity rates ${b}_{1}$ and ${b}_{2},$ respectively.
In the paper [8], the equilibria of system (3.1) are found and their stability is established in terms of the system parameters. According to the main result in [8], the system has, as the numerical simulations in [9] suggested, only two asymptotically stable equilibria, namely the “bad” equilibrium ${P}_{2}(0,D,0)$ and the “good” one ${P}_{3}(0,0,d).$ Here the values
$$d=\frac{1}{b}\left(\frac{a}{c}-1\right),\qquad D=\frac{1}{B}\left(\frac{A}{C}-1\right)$$
can be seen as homeostatic normal and cancer levels. Also, the octant of the positive states $(x,y,z)$ splits into the two basins of attraction of the two equilibria, the “bad” basin corresponding to $(0,D,0)$ and the “good” one to $(0,0,d)$ (see Fig. 1 and the Appendix for its construction). This means that if at a given time ${t}_{0}$ the state $(x\left({t}_{0}\right),y\left({t}_{0}\right),z\left({t}_{0}\right))$ belongs to the basin of attraction of either of the two equilibria, then the entire trajectory $(x\left(t\right),y\left(t\right),z\left(t\right))$ for $t\ge {t}_{0}$ remains in the same basin and, as a result, in time, approaches the corresponding equilibrium. Thus, a transplant appears successful if the initial state $({x}_{0},{y}_{0},{z}_{0})$ is located in the good basin, which happens if ${z}_{0}$ is sufficiently large compared with ${x}_{0}$ and ${y}_{0}.$ Also, an unsuccessful transplant could be turned into a successful one if by some methods/therapies one can move the state $(x,y,z)$ from the bad basin into the good one, or if one can enlarge the good basin to catch the state inside. In fact, for any transplant, one could preventively apply the same strategy in order to move the state $(x,y,z)$ (even if located in the good basin) away from the separating surface between the two basins, to make sure that further perturbations cannot change the good evolution towards the equilibrium $(0,0,d).$
From the main result in [8] we have that the separation surface $S$ of the two basins of attraction intersects the plane $y=0$ along the line
$$z=\sqrt{\frac{h}{g}}x,$$
and the plane $x=0$ along a curve close to the line
$$z=\frac{\frac{A}{C}-1}{G}y.$$
Thus, the good basin of attraction increases if at least one of the two lines goes down, which happens if one can make $h/g$ and/or $A/C$ decrease, and/or $G$ increase.
Therefore, the basin of attraction of the good equilibrium ${P}_{3}$ is enlarged if the separation surface $S$ goes down (see Fig. 2 and Fig. 3), and this can be achieved by
1. decreasing the anti-graft parameter $h;$
2. increasing the anti-host parameter $g;$
3. increasing the anti-cancer parameter $G;$
4. decreasing the growth rate $A$ of the cancer cells;
5. increasing the death rate $C$ of the cancer cells.
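The quantitative effect of such parameter changes on the two boundary slopes can be checked directly; a small sketch using the parameter values from the simulations in the Appendix (the therapy factors 1/2 and 2 applied to $h$ and $G$ are hypothetical):

```python
from math import sqrt

def slopes(h, g, A, C, G):
    """Slopes of the basin boundary lines z = sqrt(h/g)*x and z = ((A/C - 1)/G)*y."""
    return sqrt(h / g), (A / C - 1.0) / G

# Baseline values (as in the simulations of the Appendix).
base = slopes(h=20, g=25, A=0.43, C=0.03, G=4)

# Hypothetical therapy: anti-graft effect h halved (item 1),
# anti-leukemia effect G doubled (item 3).
therapy = slopes(h=10, g=25, A=0.43, C=0.03, G=8)

# Both boundary lines go down, i.e. the good basin is enlarged.
assert therapy[0] < base[0] and therapy[1] < base[1]
```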
Such changes in these parameters find their counterpart in post-transplant medical practice, namely in the following therapies: immunosuppressive therapy (related to $h$), post-transplant consolidation chemotherapy (related to $A$ and $C$) and donor T-lymphocyte infusion (related to $g$ and $G$). In [9], a series of imaginary scenarios combining these therapies has been designed, and it has been shown that success depends on their intensity, the time interval in which they are applied and the time after transplantation at which they are initiated.
Denote by ${\lambda}_{i}\left(t\right),i=1,2,\mathrm{\dots},5,$ the factors by which the parameters $h,g,G,A$ and $C$ are modified on a time interval $[0,T],$ and assume that, as functions, they belong to ${L}^{\mathrm{\infty}}(0,T).$ For example, we can take these functions piecewise constant; the value one on a certain subinterval then corresponds to an interruption of the therapy. Also, denote by $\lambda \left(t\right)$ the vector in ${\mathbb{R}}^{5}$ having these components.
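Such piecewise-constant therapy factors are easy to represent in code; a small sketch, where the schedule, the horizon and the 0.4 factor are hypothetical:

```python
def step_function(breaks, values):
    """Piecewise-constant lambda_i(t): equal to values[j] on [breaks[j], breaks[j+1])."""
    def lam(t):
        for t0, t1, v in zip(breaks, breaks[1:], values):
            if t0 <= t < t1:
                return v
        return values[-1]          # value at the right endpoint t = T
    return lam

# Hypothetical schedule on [0, T]: no therapy (factor 1) at first, then the
# anti-graft parameter h reduced to 40%, then therapy interrupted again.
T = 180.0
lam_1 = step_function([0.0, 30.0, 90.0, T], [1.0, 0.4, 1.0])
```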
Assuming that after transplant one has $y\simeq 0$ on the time interval $[0,T]$ and that the patient’s condition ${w}_{0}=({x}_{0},{y}_{0},{z}_{0})$ is in the bad basin, that is, ${z}_{0}<\sqrt{\frac{h}{g}}\,{x}_{0},$ we want to decrease the $h/g$ ratio to reach the goal
$$\frac{z\left(T\right)}{x\left(T\right)}>\sqrt{\frac{h}{g}}$$ 
meaning that the patient’s condition is brought at time $T$ into the good basin, evolving after that to the good attractor ${P}_{3}$. Obviously, the patient’s exposure to corrective therapies should be reduced as much as possible. In order to find such a minimal therapy, expressed by $\lambda \left(t\right),$ we can apply our lower and upper solution method.
Clearly, for a lower solution $({\underline{x}}_{0},{\underline{y}}_{0},{\underline{z}}_{0},{\underline{\lambda}}_{0})$ we can take the vector ${\underline{\lambda}}_{0}=(1,1,1,1,1),$ which corresponds to the absence of any post-transplant therapy. Then
$$\frac{{\underline{z}}_{0}\left(T\right)}{{\underline{x}}_{0}\left(T\right)}<\sqrt{\frac{h}{g}}.$$
An upper solution $({\overline{x}}_{0},{\overline{y}}_{0},{\overline{z}}_{0},{\overline{\lambda}}_{0})$ can be chosen by checking several vector functions $\lambda \left(t\right)$ with step-function components satisfying $0\le {\lambda}_{i}\left(t\right)\le 1$ for $i=1,4$ and ${\lambda}_{i}\left(t\right)\ge 1$ for $i=2,3,5.$ For it, one has
$$\frac{{\overline{z}}_{0}\left(T\right)}{{\overline{x}}_{0}\left(T\right)}>\sqrt{\frac{h}{g}}.$$ 
Once this upper solution is found, the algorithm starts and continues until the first step $k$ at which, for $\lambda =\lambda \left({\overline{\sigma}}_{k}\right),$ one has
$$\frac{z\left(T\right)}{x\left(T\right)}\le \sqrt{\frac{h}{g}}+\delta $$ 
for an acceptable margin $\delta >0.$ Then the vector $\lambda \left({\overline{\sigma}}_{k}\right)=\left(1-{\overline{\sigma}}_{k}\right){\underline{\lambda}}_{0}+{\overline{\sigma}}_{k}{\overline{\lambda}}_{0}$ can be a good approximation for the control $\lambda .$
In this case, referring to the general framework, we have $W=C([0,T];{(0,+\mathrm{\infty})}^{3}),$ $w=(x,y,z),$ $\mathrm{\Lambda}=\{\lambda \left(\sigma \right):\sigma \in [0,1]\},$ where $\lambda \left(\sigma \right)=\left(1-\sigma \right){\underline{\lambda}}_{0}+\sigma {\overline{\lambda}}_{0},$
$$D=\{(w,\lambda ):\frac{z\left(T\right)}{x\left(T\right)}=\sqrt{\frac{h}{g}}\},$$ 
$$\underline{D}=\{(w,\lambda ):\frac{z\left(T\right)}{x\left(T\right)}\le \sqrt{\frac{h}{g}}\},\qquad \overline{D}=\{(w,\lambda ):\frac{z\left(T\right)}{x\left(T\right)}\ge \sqrt{\frac{h}{g}}\}.$$
Also, $N=({N}_{1},{N}_{2},{N}_{3})$ is the integral operator associated with the equivalent integral system.
First, we prove that the solution operator $S$ is well-defined, that is, condition (h2) holds.
Lemma 8
For each $\lambda =\lambda \left(\sigma \right)\in \mathrm{\Lambda},$ the initial value problem $w=N(w,\lambda ),w\left(0\right)={w}_{0}$ has a unique solution $w=S\left(\sigma \right)\in W.$
Proof. First, note the Lipschitz continuity of the system nonlinearities with respect to the variables $x,y$ and $z.$ Thus the qualitative theory of the Cauchy problem applies to our situation, including the result about the behavior of the saturated solutions in a neighborhood of the boundary of the domain where the system is defined, here $[0,T]\times {(0,+\mathrm{\infty})}^{3}$ (see [1, Theorem 2.10]). Thus, it remains to prove that the saturated solution of the initial value problem does not reach the boundary. Assume the contrary. Then there is ${t}_{0}\in (0,T]$ such that $x\left(t\right),y\left(t\right),z\left(t\right)>0$ on $[0,{t}_{0})$ and the limit at ${t}_{0}$ of at least one of the three functions equals zero. Let $x\left(t\right)\to 0$ as $t\to {t}_{0}.$ From the first equation of the system, we have
$${x}^{\prime}\left(t\right)\ge -cx\left(t\right),\qquad t\in [0,{t}_{0}),$$
whence $x\left(t\right)\ge {x}_{0}{e}^{-ct},$ which contradicts our assumption. The case $z\left(t\right)\to 0$ as $t\to {t}_{0}$ is similar. The same holds if $y\left(t\right)\to 0$ as $t\to {t}_{0},$ when the contradiction comes from the estimate $y\left(t\right)\ge {y}_{0}{e}^{-C{\int}_{0}^{t}{\lambda}_{5}\left(\sigma \right)\left(s\right)\mathit{d}s}.$
Finally, we prove the continuous dependence on $\sigma $ of the solution $S\left(\sigma \right)=(x,y,z),$ that is, condition (h3).
Lemma 9
The solution operator is continuous from $[0,1]$ to $C([0,T];{\mathbb{R}}^{3}).$
Proof. From the integral system, using the Lipschitz continuity of the nonlinearities and the Volterra property of the equations, we deduce, for $i=1,2,3,$ estimates of the form
$${\Vert {S}_{i}({\sigma}_{1})-{S}_{i}({\sigma}_{2})\Vert}_{\theta}\le {\alpha}_{i1}{\Vert {S}_{1}({\sigma}_{1})-{S}_{1}({\sigma}_{2})\Vert}_{\theta}+{\alpha}_{i2}{\Vert {S}_{2}({\sigma}_{1})-{S}_{2}({\sigma}_{2})\Vert}_{\theta}+{\alpha}_{i3}{\Vert {S}_{3}({\sigma}_{1})-{S}_{3}({\sigma}_{2})\Vert}_{\theta}+{\beta}_{i}\left|{\sigma}_{1}-{\sigma}_{2}\right|$$
with respect to a Bielecki norm $\parallel \cdot {\parallel}_{\theta}$ on $C[0,T],$ for any number $\theta >0$. Choosing $\theta $ sufficiently large, the spectral radius of the matrix $M:={[{\alpha}_{ij}]}_{1\le i,j\le 3}$ becomes less than one. It follows that
$$\left[\begin{array}{c}{\Vert {S}_{1}({\sigma}_{1})-{S}_{1}({\sigma}_{2})\Vert}_{\theta}\\ {\Vert {S}_{2}({\sigma}_{1})-{S}_{2}({\sigma}_{2})\Vert}_{\theta}\\ {\Vert {S}_{3}({\sigma}_{1})-{S}_{3}({\sigma}_{2})\Vert}_{\theta}\end{array}\right]\le \left|{\sigma}_{1}-{\sigma}_{2}\right|\,{(I-M)}^{-1}\left[\begin{array}{c}{\beta}_{1}\\ {\beta}_{2}\\ {\beta}_{3}\end{array}\right].$$
This clearly shows the continuity of ${S}_{1},{S}_{2}$ and ${S}_{3}$ with respect to $\sigma .$
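For the reader's convenience, we recall the standard Bielecki norm used above (a classical construction, not restated in the paper):
$$\Vert u{\Vert}_{\theta}=\underset{t\in [0,T]}{\max}\,{e}^{-\theta t}\left|u\left(t\right)\right|,$$
a norm equivalent to the usual max norm on $C[0,T].$ Taking $\theta $ large damps the contribution of the Volterra integral terms, which is what makes the coefficients ${\alpha}_{ij},$ and hence the spectral radius of $M,$ small.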
Remark 10
Assuming that after transplant one has $x\simeq 0$ on the time interval $[0,T],$ we may similarly apply the algorithm in order to reach, alternatively, the goal
$$\frac{z\left(T\right)}{y\left(T\right)}>\frac{\frac{A}{C}-1}{G}.$$
Acknowledgments:
L.G. Parajdi benefited from financial support through the project “Development of advanced and applied research skills in the logic of STEAM+ Health”, POCU/993/6/13/153310, a project co-financed by the European Social Fund through the Human Capital Operational Program 2014–2020.
References
 [1] Barbu, V.: Differential Equations, Springer, Cham (2016). https://doi.org/10.1007/978-3-319-45261-6
 [2] Coron, J.-M.: Control and Nonlinearity, Mathematical Surveys and Monographs Vol. 136, Amer. Math. Soc., Providence (2007). https://doi.org/10.1090/surv/136
 [3] Haplea, I.Ş., Parajdi, L.G., Precup, R.: On the controllability of a system modeling cell dynamics related to leukemia, Symmetry 13(10), 1867 (2021). https://doi.org/10.3390/sym13101867
 [4] Parajdi, L.G.: Stability of the equilibria of a dynamic system modeling stem cell transplantation, Ric. Mat. 69(2), 579–601 (2020). https://doi.org/10.1007/s11587-019-00473-9
 [5] Parajdi, L.G., Patrulescu, F.O., Precup, R., Haplea, I.Ş.: Two numerical methods for solving a nonlinear system of integral equations of mixed Volterra–Fredholm type arising from a control problem related to leukemia, submitted.
 [6] Precup, R.: On some applications of the controllability principle for fixed point equations, Results Appl. Math. 13, 100236 (2022). https://doi.org/10.1016/j.rinam.2021.100236
 [7] Precup, R., Dima, D., Tomuleasa, C., Şerban, M.A., Parajdi, L.G.: Theoretical Models of Hematopoietic Cell Dynamics Related to Bone Marrow Transplantation, in Frontiers in Stem Cell and Regenerative Medicine Research, Volume 8, eds. Atta-ur-Rahman, Shazia Anjum, Bentham Science, Sharjah, pp. 202–241 (2018). https://doi.org/10.2174/9781681085890118080008
 [8] Precup, R., Şerban, M.A., Trif, D.: Asymptotic stability for a model of cell dynamics after allogeneic bone marrow transplantation, Nonlinear Dyn. Syst. Theory 13(1), 79–92 (2013). https://www.endst.kiev.ua/v13n1/7(42).pdf
 [9] Precup, R., Şerban, M.A., Trif, D., Cucuianu, A.: A planning algorithm for correction therapies after allogeneic stem cell transplantation, J. Math. Model. Algor. 11(3), 309–323 (2012). https://doi.org/10.1007/s10852-012-9187-3
 [10] Precup, R., Trif, D., Şerban M.A., Cucuianu, A.: A mathematical approach to cell dynamics before and after allogeneic bone marrow transplantation, Ann. Tiberiu Popoviciu Semin. Funct. Equ. Approx. Convexity 8, 167–175 (2010). http://atps.tucn.ro/pdf/82010atpsabstracts.pdf
Appendix: A numerical method for constructing the separation surface between the two basins of attraction
We aim to build numerically the surface $S$ that separates the basin of attraction of the “good” attractor (when the bone marrow transplant succeeds) from the basin of the “bad” attractor (when the transplant fails). Let $C$ be the curve at the intersection between the surface $S$ and the horizontal plane $z=\stackrel{~}{z}$. For numerical tractability, we approximate the surface $S$ by a ruled surface connecting the origin of the axes with the curve $C$.
The surface $S$ intersects the plane $x=0$ approximately along a line
$$z=\frac{\frac{A}{C}-1}{G}y,$$
and the plane $y=0$ along the line
$$z=\sqrt{\frac{h}{g}}x.$$ 
The above two lines intersect the plane $z=\stackrel{~}{z}$ in the points
$${M}_{0}=\left(0,\frac{G}{\frac{A}{C}-1}\stackrel{~}{z},\stackrel{~}{z}\right)\qquad \text{and}\qquad {M}_{1}=\left(\sqrt{\frac{g}{h}}\stackrel{~}{z},0,\stackrel{~}{z}\right),$$
respectively. We wish to compute the coordinates of $N-1$ points on the curve $C$. To this end,
1. Consider an equidistant partition of the segment ${M}_{0}{M}_{1}$, i.e. a series of points
$${M}_{k/N}=({x}_{k},{y}_{k},\stackrel{~}{z}),\qquad k=1,2,\mathrm{\dots},N-1,$$
where
$${y}_{k}=\frac{\frac{G}{\frac{A}{C}-1}}{\sqrt{\frac{g}{h}}}\left(\sqrt{\frac{g}{h}}\stackrel{~}{z}-{x}_{k}\right)\qquad \left(\text{such that } {M}_{k/N}\in {M}_{0}{M}_{1}\right)$$
and
$$\left|{M}_{0}{M}_{k/N}\right|=\frac{k}{N}\left|{M}_{0}{M}_{1}\right|,$$
so that the points ${M}_{k/N}$ divide the segment ${M}_{0}{M}_{1}$ into $N$ equal parts. Equivalently, in vector notation:
$$\overrightarrow{O{M}_{k/N}}=\overrightarrow{O{M}_{0}}+\frac{k}{N}\overrightarrow{{M}_{0}{M}_{1}}.$$
Componentwise, we have
$${x}_{k}={x}_{{M}_{0}}+\frac{k}{N}\left({x}_{{M}_{1}}-{x}_{{M}_{0}}\right)=\frac{k}{N}\sqrt{\frac{g}{h}}\stackrel{~}{z},\qquad {y}_{k}={y}_{{M}_{0}}+\frac{k}{N}\left({y}_{{M}_{1}}-{y}_{{M}_{0}}\right)=\left(1-\frac{k}{N}\right)\frac{G}{\frac{A}{C}-1}\stackrel{~}{z}.$$
2. For each $k=1,2,\mathrm{\dots},N-1$ we compute the following: consider, in the plane $z=\stackrel{~}{z}$, the normal to ${M}_{0}{M}_{1}$ at ${M}_{k/N}$ and, on this normal, two sequences of equidistant points ${\underline{S}}_{i},{\overline{S}}_{i}$, spaced $\epsilon $ apart:
$${x}_{{\underline{S}}_{i}}<{x}_{{M}_{k/N}},\qquad {x}_{{\overline{S}}_{i}}>{x}_{{M}_{k/N}},$$
$$\left|{\underline{S}}_{i}{M}_{k/N}\right|=i\epsilon ,\qquad \left|{\overline{S}}_{i}{M}_{k/N}\right|=i\epsilon \qquad \left(i=1,2,\mathrm{\dots}\right),$$
that is, the points are situated on the normal to ${M}_{0}{M}_{1},$ on opposite sides of ${M}_{k/N}.$ Vectorially,
$$\overrightarrow{O{\underline{S}}_{i}}=\overrightarrow{O{M}_{k/N}}-i\epsilon \overrightarrow{v},\qquad \overrightarrow{O{\overline{S}}_{i}}=\overrightarrow{O{M}_{k/N}}+i\epsilon \overrightarrow{v},$$
where $\overrightarrow{v}$ is the unit vector orthogonal to $\overrightarrow{{M}_{1}{M}_{0}},$ that is
$$\overrightarrow{v}=\frac{1}{\sqrt{{\left(\frac{G}{\frac{A}{C}-1}\stackrel{~}{z}\right)}^{2}+{\left(\sqrt{\frac{g}{h}}\stackrel{~}{z}\right)}^{2}}}\left(\frac{G}{\frac{A}{C}-1}\stackrel{~}{z},\sqrt{\frac{g}{h}}\stackrel{~}{z}\right).$$
Componentwise, for all points ${\underline{S}}_{i},{\overline{S}}_{i}$ we have
$${x}_{{\underline{S}}_{i}}={x}_{k}-i\epsilon {x}_{\overrightarrow{v}}=\frac{k}{N}\sqrt{\frac{g}{h}}\stackrel{~}{z}-i\epsilon \frac{\frac{G}{\frac{A}{C}-1}\stackrel{~}{z}}{\sqrt{{\left(\frac{G}{\frac{A}{C}-1}\stackrel{~}{z}\right)}^{2}+{\left(\sqrt{\frac{g}{h}}\stackrel{~}{z}\right)}^{2}}},$$
$${y}_{{\underline{S}}_{i}}={y}_{k}-i\epsilon {y}_{\overrightarrow{v}}=\left(1-\frac{k}{N}\right)\frac{G}{\frac{A}{C}-1}\stackrel{~}{z}-i\epsilon \frac{\sqrt{\frac{g}{h}}\stackrel{~}{z}}{\sqrt{{\left(\frac{G}{\frac{A}{C}-1}\stackrel{~}{z}\right)}^{2}+{\left(\sqrt{\frac{g}{h}}\stackrel{~}{z}\right)}^{2}}},$$
and respectively
${x}_{{\overline{S}}_{i}}$  $=$  ${x}_{k}+i\epsilon {x}_{\overrightarrow{v}}={\displaystyle \frac{k}{N}}\sqrt{{\displaystyle \frac{g}{h}}}\stackrel{~}{z}+i\epsilon {\displaystyle \frac{\frac{G}{\frac{A}{C}1}\stackrel{~}{z}}{\sqrt{{\left(\frac{G}{\frac{A}{C}1}\stackrel{~}{z}\right)}^{2}+{\left(\sqrt{\frac{g}{h}}\stackrel{~}{z}\right)}^{2}}}},$  
${y}_{{\overline{S}}_{i}}$  $=$  ${y}_{k}+i\epsilon {y}_{\overrightarrow{v}}=\left(1{\displaystyle \frac{k}{N}}\right){\displaystyle \frac{G}{\frac{A}{C}1}}\stackrel{~}{z}+i\epsilon {\displaystyle \frac{\sqrt{\frac{g}{h}}\stackrel{~}{z}}{\sqrt{{\left(\frac{G}{\frac{A}{C}1}\stackrel{~}{z}\right)}^{2}+{\left(\sqrt{\frac{g}{h}}\stackrel{~}{z}\right)}^{2}}}}.$ 
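For concreteness, the coordinate formulas above can be evaluated numerically. The following Python sketch is illustrative only (the function name `probe_points` and its argument order are ours, and the paper's computations were carried out in Matlab); it returns $M_{k/N}$ together with the probe points $\underline{S}_i$ and $\overline{S}_i$:

```python
import math

def probe_points(k, N, i, eps, G, A, C, g, h, z):
    """Return M_{k/N} and the probe points S_under_i (minus side) and
    S_over_i (plus side) on the normal to M0M1; z stands for z-tilde."""
    bx = G / (A / C - 1) * z            # x-component of the normal direction
    ay = math.sqrt(g / h) * z           # y-component of the normal direction
    norm = math.hypot(bx, ay)
    v = (bx / norm, ay / norm)          # unit vector orthogonal to M1M0
    xk = (k / N) * math.sqrt(g / h) * z
    yk = (1 - k / N) * G / (A / C - 1) * z
    s_under = (xk - i * eps * v[0], yk - i * eps * v[1])
    s_over = (xk + i * eps * v[0], yk + i * eps * v[1])
    return (xk, yk), s_under, s_over
```

By construction, both probe points lie at distance $i\varepsilon$ from $M_{k/N}$, on opposite sides of the segment $M_0M_1$.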
To find an approximation of the surface $S$, we employed the following pseudocode algorithm (implemented in Matlab R2021a):
FOR each $k=1,2,\dots,N-1$, test whether the point $M_{k/N}$ lies in the good or in the bad basin of attraction, by following the trajectory of the solution starting from that point (integrating with the coordinates of $M_{k/N}$ as initial conditions).
IF the trajectory leads to the bad attractor, test in sequence the orbits of the points $\underline{S}_i$, $i=1,2,\dots$, until the first point whose orbit lands on the good attractor is reached. The midpoint of the segment determined by the last two tested points belongs, within an $\varepsilon/2$ approximation, to the curve $C$ / surface $S$.
IF the trajectory leads to the good attractor, test in sequence the orbits of the points $\overline{S}_i$, $i=1,2,\dots$, until the first point whose orbit lands on the bad attractor is reached. The midpoint of the segment determined by the last two tested points belongs, within an $\varepsilon/2$ approximation, to the curve $C$ / surface $S$.
END_FOR
Every point on the curve $C$ found in this way is then connected to the origin. The resulting ruled surface is a rough but useful approximation of the surface $S$, computable in linear time with respect to $N$ and $1/\varepsilon$.
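The scanning step of the algorithm can be sketched in Python as follows. This is an illustrative fragment, not the authors' Matlab code: the classifier `in_good_basin` is a hypothetical stand-in for integrating the system from a probe point and observing which attractor its trajectory reaches.

```python
import math

def boundary_point(M, v, eps, in_good_basin, max_steps=1_000_000):
    """Scan the probes M +/- i*eps*v (i = 1, 2, ...) until the basin
    classification flips, then return the midpoint of the last two
    probes, which lies within eps/2 of the separating curve C.
    If M is in the bad basin, move in the -v direction (points S_under_i);
    otherwise move in the +v direction (points S_over_i)."""
    sign = 1.0 if in_good_basin(M) else -1.0
    prev, prev_good = M, in_good_basin(M)
    for i in range(1, max_steps):
        cur = (M[0] + sign * i * eps * v[0], M[1] + sign * i * eps * v[1])
        cur_good = in_good_basin(cur)
        if cur_good != prev_good:
            return ((cur[0] + prev[0]) / 2, (cur[1] + prev[1]) / 2)
        prev, prev_good = cur, cur_good
    raise RuntimeError("no basin change detected along the normal")
```

For a quick check, one can take the toy classifier `lambda p: p[1] > p[0]` (good basin above the line $y=x$) with normal direction $(1/\sqrt{2},\,-1/\sqrt{2})$; the returned midpoint then lies within $\varepsilon/2$ of that line.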
Concerning the parameters for these simulations (see Fig. 1, Fig. 2 and Fig. 3), the following values are taken: $a=0.23$, $b=2.2\times 10^{-8}$, $c=0.01$, $A=0.43$, $B=5.5\times 10^{-9}$, $C=0.03$, $g=25$, $G=4$, $h=20$. In all the simulations we used $N=10$, $\varepsilon=10^{8}$, $\tilde{z}=10^{9}$.