Abstract
The paper deals with some control problems related to the Kolmogorov system for two interacting populations. In the first problem, the control acts in time on the per capita growth rates of the two populations so that the ratio between their sizes follows a prescribed evolution. In the second problem, the control is a constant which adjusts the per capita growth rate of a single population so that it reaches a desired size at a given time. In the third problem, the control acts on the growth rate of one of the populations so that the total population reaches a prescribed level. All three problems are solved within an abstract scheme, by using operator-based techniques. Some examples illustrate the results obtained. One refers to a system that models leukemia, and another to the SIR model with vaccination.
Authors
Alexandru Hofman
Faculty of Mathematics and Computer Science, Babeş-Bolyai University, Cluj-Napoca, Romania
Radu Precup
Faculty of Mathematics and Computer Science and Institute of Advanced Studies in Science and Technology, Babeş-Bolyai University, Cluj-Napoca, Romania
Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy, Cluj-Napoca
Keywords
Kolmogorov system, control problem, fixed point.
Paper coordinates
Al. Hofman, R. Precup, On some control problems for Kolmogorov type systems, Mathematical Modelling and Control, 2 (2022) no. 3, pp. 90-99, http://doi.org/10.3934/mmc.2022011
About this paper
Journal
Mathematical Modelling and Control
Publisher Name
AIMS Press
Print ISSN
2767-8946
Online ISSN
Paper (preprint) in HTML form
On some control problems for Kolmogorov type systems
Abstract.
The paper deals with some control problems related to the Kolmogorov system for two interacting populations. In the first problem, the control acts in time on the per capita growth rates of the two populations so that the ratio between their sizes follows a prescribed evolution. In the second problem, the control is a constant which adjusts the per capita growth rate of a single population so that it reaches a desired size at a given time. In the third problem, the control acts on the growth rate of one of the populations so that the total population reaches a prescribed level. All three problems are solved within an abstract scheme, by using operator-based techniques. Some examples illustrate the results obtained. One refers to a system that models leukemia, and another to the SIR model with vaccination.
Key words: Kolmogorov system, control problem, fixed point.
Mathematics Subject Classification: 34K35, 93C15
1. Introduction
The control of differential equations is the subject of numerous studies in the literature. Generally speaking, it consists in determining some of the parameters of an equation or system of equations so that the solution satisfies certain conditions, beyond those imposed by well-posed problems, such as initial or boundary conditions ([2, p. 34]).
In [7] we have introduced a controllability principle for a general control problem related to operator equations, in the framework of fixed point theory. We reproduce it here for the convenience of the reader. It consists in finding $(w,\lambda ),$ a solution to the following system
(1.1)  $$\{\begin{array}{c}w={H}_{0}(w,\lambda ),\\ w\in W,\lambda \in \mathrm{\Lambda},(w,\lambda )\in \mathcal{D}\end{array}$$ 
associated to the fixed point equation $w={H}_{0}(w,\lambda ).$ Here $w$ is the state variable, $\lambda $ is the control variable, $W$ is the domain of the states, $\mathrm{\Lambda}$ is the domain of controls and $\mathcal{D}$ is the controllability domain, usually given by means of a certain condition/property imposed on $w,$ or on both $w$ and $\lambda .$ Notice the very general formulation of the control problem, in terms of sets, where $W,\mathrm{\Lambda}$ and $\mathcal{D}\subset W\times \mathrm{\Lambda}$ are not necessarily structured sets and ${H}_{0}$ is any mapping from $W\times \mathrm{\Lambda}$ to $W.$
In this context, we say that the equation $w={H}_{0}(w,\lambda )$ is controllable in $W\times \mathrm{\Lambda}$ with respect to $\mathcal{D},$ providing that problem (1.1) has a solution $(w,\lambda )$. If the solution is unique we say that the equation is uniquely controllable.
Let $\mathrm{\Sigma}$ be the set of all possible solutions $(w,\lambda )$ of the fixed point equation and ${\mathrm{\Sigma}}_{1}$ be the set of all $w$ that are first components of some solutions of the fixed point equation, that is
$$\Sigma =\{(w,\lambda )\in W\times \Lambda :\ w={H}_{0}(w,\lambda )\},$$
$${\Sigma}_{1} =\{w\in W:\ \text{there is }\lambda \in \Lambda \text{ with }(w,\lambda )\in \Sigma \}.$$
Clearly, the set of all solutions of the control problem (1.1) is given by $\mathrm{\Sigma}\cap \mathcal{D}.$
Consider the set-valued map ${F}_{0}:{\mathrm{\Sigma}}_{1}\to \mathrm{\Lambda}$ defined as
$${F}_{0}\left(w\right)=\{\lambda \in \mathrm{\Lambda}:\text{}(w,\lambda )\in \mathrm{\Sigma}\cap \mathcal{D}\}.$$ 
Roughly speaking, ${F}_{0}$ gives the ‘expression’ of the control variable in terms of the state variable.
We have the following general principle for solving the control problem (1.1).
Proposition 1.1.
If for some extension $F:W\to \mathrm{\Lambda}$ of ${F}_{0}$ from ${\mathrm{\Sigma}}_{1}$ to $W,$ there exists a fixed point $w\in W$ of the set-valued map
$$H\left(w\right):={H}_{0}(w,F\left(w\right)),$$ 
i.e.,
(1.2)  $$w={H}_{0}(w,\lambda ),$$ 
for some $\lambda \in F\left(w\right),$ then the couple $(w,\lambda )$ is a solution of the control problem (1.1).
Proof.
Clearly $(w,\lambda )\in \mathrm{\Sigma}.$ Hence $w\in {\mathrm{\Sigma}}_{1}$ and so $F\left(w\right)={F}_{0}\left(w\right).$ Then $\lambda \in {F}_{0}\left(w\right)$ and from the definition of ${F}_{0},$ it follows that $(w,\lambda )\in \mathcal{D}.$ Therefore $(w,\lambda )$ solves (1.1).
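The reduction behind Proposition 1.1 can be made concrete on a deliberately simple finite-dimensional toy problem. Everything in the sketch below (the map $H_0$, the data and the target) is our illustrative choice, not taken from the paper; it only shows the mechanics: express the control from the controllability condition, extend that expression to all states, and iterate the composed operator.

```python
# Toy instance of Proposition 1.1; the map H0, the data p0, q0 and the
# target are illustrative assumptions, not from the paper.
# State w = (p, q); the controllability domain D asks that the first
# component of the state equal a prescribed target value.

p0, q0, target = 1.0, 1.0, 3.0

def H0(w, lam):
    # a simple fixed point equation w = H0(w, lam)
    p, q = w
    return (p0 + 0.5 * q + lam, q0 + 0.5 * p)

def F(w):
    # On Sigma ∩ D the first equation reads target = p0 + 0.5*q + lam,
    # so F0 gives lam = target - p0 - 0.5*q; the same formula makes
    # sense for every state w, which is the extension F of F0.
    return target - p0 - 0.5 * w[1]

# fixed point iteration for the composed operator H(w) = H0(w, F(w))
w = (0.0, 0.0)
for _ in range(50):
    w = H0(w, F(w))

lam = F(w)
print(w, lam)   # w = (3.0, 2.5), lam = 0.75: w = H0(w, lam) and w[0] = target
```

The iteration stabilizes after two steps here; in the function-space problems of Section 3 the same composition is iterated under a contraction or compactness condition.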
The mappings ${F}_{0}$ and $F$ can in particular be single-valued maps, and in many cases the extension $F$ can be done using the expression of ${F}_{0}.$
From a theoretical perspective, the method leading to fixed point equations with composed operators is suitable to be related to advanced research in fixed point theory for single-valued and multivalued operators, especially for operators of the decomposable type as considered in [15].
The applicability of this general principle is tested in [7] on a system modeling cell dynamics related to leukemia and in [16] on a control problem for the Lotka-Volterra predator-prey system (see [12] for potential new applications). Also, in [16] two problems are presented that apparently lack a control but can be treated as control problems according to the above principle. The first is the Stokes system, for which the control is given by the pressure and comes from the necessity to adjust the flow rate of the incompressible fluid through the porous medium; the second is a boundary value problem where the unknown value of the solution at some point takes over the function of a control variable in order for the solution to satisfy a boundary condition. It must be said that the fixed point method is known in the literature, where it is applied to specific control problems related to various differential equations (see, e.g., [1], [3], [5], [8], [9], [11], the monograph [4] and the references therein).
The aim of this paper is to present some control problems related to the Kolmogorov system [10] that we investigate using the general operatorbased technique from above. Introduced as a generalization of the wellknown Volterra’s model in population dynamics (see [17]), Kolmogorov’s system takes into account general per capita rates of two interacting populations and reads as follows:
$$\{\begin{array}{c}{x}^{\prime}=xf(x,y),\hfill \\ {y}^{\prime}=yg(x,y).\hfill \end{array}$$ 
Such models also come from other fields, for example from economics, chemistry, biology and medicine, when the variables $x$ and $y$ can be attributed the meaning of density of some ‘quantities’ (species, populations, economic units, chemicals, medicines etc.) and when growth rates can be best understood per capita $({x}^{\prime}/x,{y}^{\prime}/y),$ or as logarithmic growth rates (${\left(\mathrm{ln}x\right)}^{\prime}={x}^{\prime}/x,$ ${\left(\mathrm{ln}y\right)}^{\prime}={y}^{\prime}/y$).
Naturally, when studying the interaction between two given quantities, the two rates $f$ and $g$ must be made explicit in terms of some parameters. Some of these parameters are specific to the two quantities and do not support changes, others can be influenced, even added, in order to control the evolution and achieve a desired balance.
2. Preliminaries
This section is devoted to a brief presentation of some notions and results that will be used in the next section. It is intended for those less familiar with the theoretical framework in which we place ourselves.
We say that an integral equation is of Volterra type if the involved integral is on a variable interval as is the case of an equation of the form
(2.1)  $$x\left(t\right)=\phi \left(t\right)+{\int}_{a}^{t}\psi (t,s,x\left(s\right))\mathit{d}s,t\in [a,b]$$ 
and that it is of Fredholm type if the involved integral is given on a fixed interval, as in the equation
$$x\left(t\right)=\phi \left(t\right)+{\int}_{a}^{b}\psi (t,s,x\left(s\right))\mathit{d}s,t\in [a,b].$$ 
If the equation involves both types of integrals, we say that it is of Volterra-Fredholm type.
When dealing with Volterra type equations, instead of the max-norm on the space $C[a,b]$ given by $\Vert x\Vert ={\mathrm{max}}_{t\in [a,b]}|x(t)|,$ it is convenient to consider an equivalent norm defined by
$${\Vert x\Vert}_{\theta}=\underset{t\in [a,b]}{\mathrm{max}}\left(|x(t)|\,{e}^{-\theta (t-a)}\right),$$
for some suitable number $\theta >0.$ Such a norm is called a Bielecki norm and it is equivalent to the max-norm, as follows from the inequalities
$${e}^{-\theta (b-a)}\Vert x\Vert \le {\Vert x\Vert}_{\theta}\le \Vert x\Vert \phantom{\rule{2em}{0ex}}\left(x\in C[a,b]\right).$$
The point of using Bielecki norms is that one can choose $\theta$ large enough to make relevant constants small; for example, the Lipschitz constant of $\psi$ can be dominated so as to guarantee the contraction property of the integral operator $N:C[a,b]\to C[a,b]$ given by the right-hand side of equation (2.1). Indeed, if $\psi$ is such that
$$|\psi (t,s,x)-\psi (t,s,y)|\le L|x-y| \quad \text{for all } x,y\in \mathbb{R};\ t,s\in [a,b]$$
and some constant $L>0,$ then for any functions $x,y\in C[a,b],$ we have
$$|N(x)(t)-N(y)(t)| \le {\int}_{a}^{t}|\psi (t,s,x(s))-\psi (t,s,y(s))|\,\mathit{d}s \le L{\int}_{a}^{t}|x(s)-y(s)|\,\mathit{d}s.$$
Furthermore
$${\int}_{a}^{t}|x(s)-y(s)|\,\mathit{d}s = {\int}_{a}^{t}|x(s)-y(s)|\,{e}^{-\theta (s-a)}{e}^{\theta (s-a)}\,\mathit{d}s \le {\Vert x-y\Vert}_{\theta}{\int}_{a}^{t}{e}^{\theta (s-a)}\,\mathit{d}s = \frac{1}{\theta}{\Vert x-y\Vert}_{\theta}\left({e}^{\theta (t-a)}-1\right) \le \frac{1}{\theta}{\Vert x-y\Vert}_{\theta}\,{e}^{\theta (t-a)}.$$
It follows that
$$|N(x)(t)-N(y)(t)|\le \frac{L}{\theta}{\Vert x-y\Vert}_{\theta}\,{e}^{\theta (t-a)}.$$
Multiplying by ${e}^{-\theta (t-a)}$ and taking the maximum for $t\in [a,b]$ yields
$${\Vert N(x)-N(y)\Vert}_{\theta}\le \frac{L}{\theta}{\Vert x-y\Vert}_{\theta}.$$
Thus, choosing any $\theta >L,$ the operator $N$ is a contraction with respect to the Bielecki norm $\Vert \cdot \Vert_{\theta}.$
However, the above reasoning does not work if instead of the integral ${\int}_{a}^{t}$ one considers the integral ${\int}_{a}^{b}.$ Thus the trick based on Bielecki norms does not apply to Fredholm and Volterra-Fredholm integral equations.
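A small numerical experiment is consistent with this discussion: for a linear Volterra equation with Lipschitz constant $L=2$ on $[0,1]$, the product $L(b-a)=2$ exceeds $1$, so the operator is not a max-norm contraction, yet Picard iteration still converges on the whole interval, exactly as the Bielecki-norm argument predicts. The grid and quadrature below are our own illustration.

```python
import numpy as np

# Picard iteration for the Volterra equation
#     x(t) = 1 + ∫_0^t 2 x(s) ds,   t in [0, 1],
# whose exact solution is exp(2t).  Here L = 2 and L*(b - a) = 2 > 1, so N
# is not a contraction for the max-norm, but it is one for the Bielecki
# norm with any theta > 2; accordingly the iteration converges on [0, 1].

t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]

def N(x):
    # N(x)(t) = 1 + ∫_0^t 2 x(s) ds via a cumulative trapezoidal rule
    integrand = 2.0 * x
    cum = np.concatenate(([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2.0) * dt))
    return 1.0 + cum

x = np.ones_like(t)      # initial guess
for _ in range(60):
    x = N(x)

err = np.max(np.abs(x - np.exp(2.0 * t)))
print(err)   # small: limited only by the quadrature error
```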
We conclude these preliminaries by recalling two basic fixed point theorems which are used in this paper (see, e.g., [6] and [14]).
Theorem 2.1 (Banach contraction principle).
Let $(X,d)$ be a complete metric space and $N:X\to X$ be a contraction. Then $N$ has a unique fixed point $x,$ and ${N}^{k}\left(y\right)\to x$ as $k\to \mathrm{\infty}$ for each $y\in X.$
Theorem 2.2 (Schauder fixed point theorem).
Let $X$ be a Banach space, $D\subset X$ a nonempty convex bounded closed set and $N:D\to D$ be a completely continuous operator. Then $N$ has at least one fixed point.
3. Main Results
3.1. First control problem
Let us consider the following control problem for the general Kolmogorov system under initial conditions
(3.1)  $$\{\begin{array}{c}{x}^{\prime}(t)=x(t)\left(f(x,y)-\lambda (t)\right),\\ {y}^{\prime}(t)=y(t)\left(g(x,y)-c\lambda (t)\right),\\ x(0)={x}_{0},\ y(0)={y}_{0},\end{array}$$
where $\lambda (t)$ is the control function and $c$ is a positive correction factor, $c\ne 1$. We want to find a positive solution $(x,y)$ so that
(3.2)  $$\frac{y(t)}{x(t)}=r(t),$$ 
where $r$ is a given positive continuous function on some interval $[0,T].$
Thus the problem consists in finding how to change the per capita growth rates for the ratio of the two species to follow a desired evolution given by $r\left(t\right)$ on a fixed time interval $[0,T].$ The correction factor $c$ expresses the fact that the effect of the control intervention on the two rates is manifested differently in the two species.
We have the following result.
Theorem 3.1.
Assume that $f,g\in {C}^{1}\left({\mathbb{R}}_{+}^{2}\right),r\in {C}^{1}[0,T],$ $r>0$ on $[0,T]$ and that the functions
(3.3)  $$x\cdot {f}_{x}(x,y),y\cdot {f}_{y}(x,y),x\cdot {g}_{x}(x,y),y\cdot {g}_{y}(x,y)$$ 
are bounded on ${\mathbb{R}}_{+}^{2}.$ Then the control problem (3.1) has a unique solution $(x,y,\lambda )$ with $x,y>0.$
Proof.
We look for a positive solution $(x,y),$ so we may take them under the form
$$x={e}^{u},y={e}^{v}.$$ 
Then the controllability condition (3.2) becomes
(3.4)  $$v(t)-u(t)=\mathrm{ln}\,r(t)$$
and system (3.1) reduces to
$$\{\begin{array}{c}{u}^{\prime}(t)=f({e}^{u\left(t\right)},{e}^{v\left(t\right)})-\lambda (t),\\ {v}^{\prime}(t)=g({e}^{u\left(t\right)},{e}^{v\left(t\right)})-c\lambda (t),\\ u(0)={u}_{0},\ v(0)={v}_{0},\end{array}$$
where ${u}_{0}:=\mathrm{ln}\,{x}_{0}$ and ${v}_{0}:=\mathrm{ln}\,{y}_{0}.$ The problem is now equivalent to the integral system
(3.5)  $$\{\begin{array}{c}u(t)={u}_{0}+{\int}_{0}^{t}f({e}^{u(s)},{e}^{v(s)})\mathit{d}s-{\int}_{0}^{t}\lambda (s)\mathit{d}s,\\ v(t)={v}_{0}+{\int}_{0}^{t}g({e}^{u(s)},{e}^{v(s)})\mathit{d}s-c{\int}_{0}^{t}\lambda (s)\mathit{d}s.\end{array}$$
Replacing in the controllability condition (3.4) yields the expression of ${\int}_{0}^{t}\lambda \left(s\right)\mathit{d}s,$ namely
(3.6)  $${\int}_{0}^{t}\lambda (s)\mathit{d}s=\frac{{u}_{0}-{v}_{0}+\mathrm{ln}\,r(t)}{1-c}+\frac{1}{1-c}{\int}_{0}^{t}\left(f({e}^{u(s)},{e}^{v(s)})-g({e}^{u(s)},{e}^{v(s)})\right)\mathit{d}s,$$
which by differentiation gives the form of the control function in terms of the state variables
(3.7)  $$\lambda (t)=\frac{{r}^{\prime}(t)}{(1-c)r(t)}+\frac{1}{1-c}\left(f({e}^{u(t)},{e}^{v(t)})-g({e}^{u(t)},{e}^{v(t)})\right).$$
Using (3.6) in (3.5) we obtain the fixed point equations
$$\{\begin{array}{c}u(t)=\frac{{v}_{0}-c{u}_{0}-\mathrm{ln}\,r(t)}{1-c}+\frac{1}{1-c}{\int}_{0}^{t}\left(-cf({e}^{u(s)},{e}^{v(s)})+g({e}^{u(s)},{e}^{v(s)})\right)\mathit{d}s,\\ v(t)=\frac{{v}_{0}-c{u}_{0}-c\,\mathrm{ln}\,r(t)}{1-c}+\frac{1}{1-c}{\int}_{0}^{t}\left(-cf({e}^{u(s)},{e}^{v(s)})+g({e}^{u(s)},{e}^{v(s)})\right)\mathit{d}s.\end{array}$$
Consider the operators $A$ and $B$ given by
(3.8)  $$A(u,v)(t)=\frac{{v}_{0}-c{u}_{0}-\mathrm{ln}\,r(t)}{1-c}+\frac{1}{1-c}{\int}_{0}^{t}\left(-cf({e}^{u(s)},{e}^{v(s)})+g({e}^{u(s)},{e}^{v(s)})\right)\mathit{d}s,$$
$$B(u,v)(t)=\frac{{v}_{0}-c{u}_{0}-c\,\mathrm{ln}\,r(t)}{1-c}+\frac{1}{1-c}{\int}_{0}^{t}\left(-cf({e}^{u(s)},{e}^{v(s)})+g({e}^{u(s)},{e}^{v(s)})\right)\mathit{d}s.$$
We apply Banach’s fixed point theorem to the operator $H=(A,B)$ on the space $C([0,T];{\mathbb{R}}^{2})$ endowed with a suitable Bielecki norm. This is possible since the integral equations are of Volterra type and the functions $f({e}^{u},{e}^{v})$ and $g({e}^{u},{e}^{v})$ are Lipschitz continuous on the whole space ${\mathbb{R}}^{2}.$ Indeed, their partial derivatives are
$$\frac{\partial f({e}^{u},{e}^{v})}{\partial u}={f}_{x}({e}^{u},{e}^{v})\cdot {e}^{u},\qquad \frac{\partial f({e}^{u},{e}^{v})}{\partial v}={f}_{y}({e}^{u},{e}^{v})\cdot {e}^{v}$$
and the similar ones for $g,$ and they are bounded based on our assumption on functions (3.3).
We now show that the operators $A$ and $B$ are Lipschitz continuous with respect to a suitable Bielecki norm so that $H=(A,B)$ is a contraction. Let $u,v,\overline{u},\overline{v}\in C[0,T]$ be arbitrary. We have
$$|A(u,v)(t)-A(\overline{u},\overline{v})(t)| \le \frac{c}{|1-c|}{\int}_{0}^{t}|f({e}^{u\left(s\right)},{e}^{v\left(s\right)})-f({e}^{\overline{u}\left(s\right)},{e}^{\overline{v}\left(s\right)})|\,\mathit{d}s+\frac{1}{|1-c|}{\int}_{0}^{t}|g({e}^{u\left(s\right)},{e}^{v\left(s\right)})-g({e}^{\overline{u}\left(s\right)},{e}^{\overline{v}\left(s\right)})|\,\mathit{d}s.$$
If we denote by $M$ a bound for the absolute value of functions (3.3), then using Lagrange’s mean value theorem we find
$${\int}_{0}^{t}|f({e}^{u\left(s\right)},{e}^{v\left(s\right)})-f({e}^{\overline{u}\left(s\right)},{e}^{\overline{v}\left(s\right)})|\,\mathit{d}s \le {\int}_{0}^{t}\left(|f({e}^{u},{e}^{v})-f({e}^{\overline{u}},{e}^{v})|+|f({e}^{\overline{u}},{e}^{v})-f({e}^{\overline{u}},{e}^{\overline{v}})|\right)\mathit{d}s \le M{\int}_{0}^{t}\left(|u\left(s\right)-\overline{u}\left(s\right)|+|v(s)-\overline{v}(s)|\right)\mathit{d}s.$$
Now for a positive number $\theta $ we introduce the Bielecki norm $\parallel \cdot {\parallel}_{\theta}$ on $C[0,T]$ given by
$${\Vert u\Vert}_{\theta}=\underset{t\in [0,T]}{\mathrm{max}}\left(|u(t)|\,{e}^{-\theta t}\right),$$
and a similar norm on $C([0,T];{\mathbb{R}}^{2})$ defined by
$${\Vert (u,v)\Vert}_{\theta}={\Vert u\Vert}_{\theta}+{\Vert v\Vert}_{\theta}.$$ 
Then
(3.12)  $${\int}_{0}^{t}\left(|u(s)-\overline{u}(s)|+|v(s)-\overline{v}(s)|\right)\mathit{d}s = {\int}_{0}^{t}\left(|u(s)-\overline{u}(s)|{e}^{-\theta s}+|v(s)-\overline{v}(s)|{e}^{-\theta s}\right){e}^{\theta s}\,\mathit{d}s \le \left({\Vert u-\overline{u}\Vert}_{\theta}+{\Vert v-\overline{v}\Vert}_{\theta}\right){\int}_{0}^{t}{e}^{\theta s}\,\mathit{d}s \le \frac{1}{\theta}\left({\Vert u-\overline{u}\Vert}_{\theta}+{\Vert v-\overline{v}\Vert}_{\theta}\right){e}^{\theta t} = \frac{1}{\theta}{\Vert (u,v)-(\overline{u},\overline{v})\Vert}_{\theta}\,{e}^{\theta t}.$$
A similar estimate is valid for $g.$ Now the above estimates for $A$ together with (3.12) give
$$|A(u,v)(t)-A(\overline{u},\overline{v})(t)|\le \frac{M}{\theta}\cdot \frac{c+1}{|1-c|}\,{\Vert (u,v)-(\overline{u},\overline{v})\Vert}_{\theta}\,{e}^{\theta t}.$$
Dividing by ${e}^{\theta t}$ and taking the maximum for $t\in [0,T]$ gives
$${\Vert A(u,v)-A(\overline{u},\overline{v})\Vert}_{\theta}\le \frac{M}{\theta}\cdot \frac{c+1}{|1-c|}\,{\Vert (u,v)-(\overline{u},\overline{v})\Vert}_{\theta}.$$
Similarly
$${\Vert B(u,v)-B(\overline{u},\overline{v})\Vert}_{\theta}\le \frac{M}{\theta}\cdot \frac{c+1}{|1-c|}\,{\Vert (u,v)-(\overline{u},\overline{v})\Vert}_{\theta}.$$
Summing up we obtain
(3.13)  $${\Vert H(u,v)-H(\overline{u},\overline{v})\Vert}_{\theta}\le \frac{2M}{\theta}\cdot \frac{c+1}{|1-c|}\,{\Vert (u,v)-(\overline{u},\overline{v})\Vert}_{\theta}.$$
Hence, choosing $\theta$ large enough, namely $\theta >2M\left(c+1\right)/|1-c|,$ the operator $H$ becomes a contraction on the space $C([0,T];{\mathbb{R}}^{2})$ endowed with the norm $\Vert \cdot \Vert_{\theta}.$ The conclusion now follows from the Banach contraction theorem.
If the hypothesis on the boundedness of the functions (3.3) is removed, we still have the following result.
Theorem 3.2.
Assume that $f,g\in {C}^{1}\left({\mathbb{R}}_{+}^{2}\right),$ $r\in {C}^{1}[0,T],$ $r>0$ on $[0,T]$ and that the function
(3.14)  $$\frac{-c}{1-c}f(x,y)+\frac{1}{1-c}g(x,y)$$
is bounded above on ${\mathbb{R}}_{+}^{2}.$ Then the control problem (3.1) has a unique solution $(x,y,\lambda )$ with $x,y>0.$
Proof.
Step 1: Existence and uniqueness in a subset. We shall use Banach contraction theorem, this time in a closed subset of $C([0,T];{\mathbb{R}}^{2}),$ again with respect to a Bielecki norm. As in the proof of the previous theorem, we have to find a fixed point $(u,v)$ of the operator $H=(A,B),$ where $A$ and $B$ are given by (3.8). Let ${M}_{0}$ be an upper bound of function (3.14). Then there is a number $\rho >0$ such that for every $u,v\in C[0,T]$ and $t\in [0,T],$ the following inequalities hold:
(3.15)  $$A(u,v)(t)\le \frac{|{v}_{0}-c{u}_{0}-\mathrm{ln}\,r(t)|}{|1-c|}+{M}_{0}T\le \rho ,\qquad B(u,v)(t)\le \frac{|{v}_{0}-c{u}_{0}-c\,\mathrm{ln}\,r(t)|}{|1-c|}+{M}_{0}T\le \rho .$$
Thus, denoting
$${D}_{\rho}:=\{(u,v)\in C([0,T];{\mathbb{R}}^{2}):\ u,v\le \rho \ \text{on}\ [0,T]\},$$
we have $H\left({D}_{\rho}\right)\subset {D}_{\rho}.$ Hence there is a chance to apply the Banach contraction theorem to the operator $H$ on the closed subset ${D}_{\rho}$ of $C([0,T];{\mathbb{R}}^{2}).$ It remains to guarantee the contraction property of $H.$ The functions (3.3) being continuous, they are bounded for $x,y\in [0,{e}^{\rho}].$ Let $M$ be their bound. Then the estimates from the proof of Theorem 3.1 are valid for any couple $(u,v)\in {D}_{\rho},$ and consequently the contraction inequality (3.13) can be obtained in the same way for a large enough $\theta .$ Thus, with respect to the metric on ${D}_{\rho}$ induced by the norm $\Vert \cdot \Vert_{\theta}$ on $C([0,T];{\mathbb{R}}^{2}),$ the operator $H$ is a contraction. Therefore the Banach contraction theorem applies and proves the existence and uniqueness of the solution in ${D}_{\rho}.$ It remains to prove that this solution does not depend on the choice of the bound $\rho .$
Step 2: Uniqueness. The above reasoning is valid for any number $\rho $ sufficiently large that inequalities (3.15) hold. Thus according to the result at Step 1, the solution obtained in ${D}_{\stackrel{~}{\rho}}$ for any $\stackrel{~}{\rho}$ larger than $\rho $ must coincide with the solution obtained in ${D}_{\rho}.$ Thus the solution $(u,v)$ is unique.
3.2. Second control problem
We consider the problem of controllability of the Kolmogorov system
(3.16)  $$\{\begin{array}{c}{x}^{\prime}(t)=x(t)\left[f(x,y)-\lambda \right],\\ {y}^{\prime}(t)=y(t)\cdot g(x,y),\\ x(0)={x}_{0},\ y(0)={y}_{0},\end{array}$$
where $\lambda $ is constant. We want to find a solution so that $x(T)={x}_{1}$.
Thus the problem is to adjust, by a constant amount, the per capita rate of only one of the two populations so that it reaches a desired threshold in a given time.
Theorem 3.3.
Let $f,g\in C\left({\mathbb{R}}_{+}^{2}\right).$
(a) If $f$ and $g$ are bounded on ${\mathbb{R}}_{+}^{2},$ then for every $T>0,$ the control problem has a solution $(x,y,\lambda )$ with $x,y>0.$
(b) If ${x}_{0},{y}_{0},{x}_{1}\ge 1,$ then for each ${\rho}_{0}>\mathrm{max}\{{x}_{0},{x}_{1},{y}_{0}\},$ there exists ${T}_{{\rho}_{0}}>0$ such that for any $T\in (0,{T}_{{\rho}_{0}}],$ the control problem has a solution $(x,y,\lambda )$ with $0<x,y\le {\rho}_{0}.$
Proof.
Here again looking for positive solutions we let
$$x={e}^{u},y={e}^{v}$$ 
and we denote ${u}_{0}=u(0)=\mathrm{ln}\,{x}_{0},$ ${v}_{0}=v(0)=\mathrm{ln}\,{y}_{0}$ and ${u}_{1}=\mathrm{ln}\,{x}_{1}.$ Substitution and integration yield the Volterra-type integral system
$$\{\begin{array}{c}u(t)={u}_{0}+{\int}_{0}^{t}f({e}^{u(s)},{e}^{v(s)})\mathit{d}s-\lambda t,\\ v(t)={v}_{0}+{\int}_{0}^{t}g({e}^{u(s)},{e}^{v(s)})\mathit{d}s\end{array}$$
and using the controllability condition $x(T)={x}_{1}$ gives the expression of the control parameter in terms of the state variables,
$$\lambda =\frac{1}{T}\left({u}_{0}-{u}_{1}+{\int}_{0}^{T}f({e}^{u(s)},{e}^{v(s)})\mathit{d}s\right).$$
Thus we arrive at the Volterra-Fredholm type integral system
$$\{\begin{array}{c}u(t)={u}_{0}+{\int}_{0}^{t}f({e}^{u(s)},{e}^{v(s)})\mathit{d}s-\frac{t}{T}\left({u}_{0}-{u}_{1}+{\int}_{0}^{T}f({e}^{u(s)},{e}^{v(s)})\mathit{d}s\right),\\ v(t)={v}_{0}+{\int}_{0}^{t}g({e}^{u(s)},{e}^{v(s)})\mathit{d}s,\end{array}$$
which can be seen as a fixed point equation in $C([0,T];{\mathbb{R}}^{2})$ for the operator $H=(A,B),$ where
$$A(u,v)(t)=\left(1-\frac{t}{T}\right){u}_{0}+\frac{t}{T}{u}_{1}+\left(1-\frac{t}{T}\right){\int}_{0}^{t}f({e}^{u(s)},{e}^{v(s)})\mathit{d}s-\frac{t}{T}{\int}_{t}^{T}f({e}^{u(s)},{e}^{v(s)})\mathit{d}s,$$
$$B(u,v)(t)={v}_{0}+{\int}_{0}^{t}g({e}^{u(s)},{e}^{v(s)})\mathit{d}s.$$
The system being of Volterra-Fredholm type, the Bielecki technique of equivalent norms does not apply. Thus in $C[0,T]$ we are forced to use the max-norm $\Vert u\Vert ={\mathrm{max}}_{t\in [0,T]}|u(t)|.$
By virtue of the Arzelà-Ascoli theorem, the operator $H$ is completely continuous.
(a) Let $M>0$ be such that $|f(x,y)|,|g(x,y)|\le M$ for all $x,y\in {\mathbb{R}}_{+}.$ Then, using the fact that a convex combination of nonnegative numbers is less than or equal to their maximum, we obtain
$$\Vert A(u,v)\Vert \le \mathrm{max}\{|{u}_{0}|,|{u}_{1}|\}+TM,\qquad \Vert B(u,v)\Vert \le |{v}_{0}|+TM.$$
Hence, if $\rho :=\mathrm{max}\{|{u}_{0}|,|{u}_{1}|,|{v}_{0}|\}+TM$ and
$${D}_{\rho}:=\{(u,v)\in C([0,T];{\mathbb{R}}^{2}):\ \Vert u\Vert ,\Vert v\Vert \le \rho \},$$
then $H\left({D}_{\rho}\right)\subset {D}_{\rho}$ and Schauder’s fixed point theorem applies and gives the result.
(b) Let $M$ be such that $|f(x,y)|,|g(x,y)|\le M$ for all $x,y\in [0,{\rho}_{0}]$ and let $\rho :=\mathrm{ln}\,{\rho}_{0}.$ Obviously $\rho >0.$ The invariance condition $H\left({D}_{\rho}\right)\subset {D}_{\rho}$ still holds provided that
$$\mathrm{max}\{|{u}_{0}|,|{u}_{1}|\}+TM\le \rho ,\qquad |{v}_{0}|+TM\le \rho ,$$
that is $\mathrm{max}\{|{u}_{0}|,|{u}_{1}|,|{v}_{0}|\}+TM\le \rho ,$ which happens for
$$T\le {T}_{{\rho}_{0}}:=\frac{1}{M}\left(\rho -\mathrm{max}\{|{u}_{0}|,|{u}_{1}|,|{v}_{0}|\}\right).$$
The result follows again from Schauder’s fixed point theorem.
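A numerical sketch of the second problem, with bounded rates of our own choosing (they are not from the paper), illustrates the scheme: iterate the operator $H=(A,B)$ above and recover the constant control $\lambda$. Since Theorem 3.3 rests on Schauder's theorem, plain iteration is only a heuristic; it happens to converge for these data.

```python
import numpy as np

# Sketch of the second control problem with bounded illustrative rates
# (our choice, not the paper's):
#     f(x, y) = 1/(1 + x + y),   g(x, y) = 1/(2 + x + y).
# We iterate the Volterra-Fredholm operator H = (A, B) and recover the
# constant control lambda from the controllability condition x(T) = x1.

T, x0, y0, x1 = 1.0, 1.0, 1.0, 1.2
u0, v0, u1 = np.log(x0), np.log(y0), np.log(x1)

t = np.linspace(0.0, T, 1001)
dt = t[1] - t[0]

def cumint(h):
    # cumulative trapezoidal integral of the samples h over the grid t
    return np.concatenate(([0.0], np.cumsum((h[1:] + h[:-1]) / 2.0) * dt))

f = lambda x, y: 1.0 / (1.0 + x + y)
g = lambda x, y: 1.0 / (2.0 + x + y)

u = np.full_like(t, u0)
v = np.full_like(t, v0)
for _ in range(200):
    F = cumint(f(np.exp(u), np.exp(v)))
    G = cumint(g(np.exp(u), np.exp(v)))
    u, v = u0 + F - (t / T) * (u0 - u1 + F[-1]), v0 + G

lam = (u0 - u1 + cumint(f(np.exp(u), np.exp(v)))[-1]) / T
x, y = np.exp(u), np.exp(v)
print(x[-1], lam)   # x(T) hits the prescribed level x1
```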
3.3. Third control problem
The problem consists in changing the growth rate (not the per capita rate) of one of the two populations so that at time $T,$ the total population reaches a desired level $\gamma .$ More exactly, we consider the problem
(3.17)  $$\{\begin{array}{c}{x}^{\prime}(t)=x(t)f(x(t),y(t))-\lambda ,\\ {y}^{\prime}(t)=y(t)g(x(t),y(t)),\\ x(0)={x}_{0},\ y(0)={y}_{0},\\ x(T)+y(T)=\gamma .\end{array}$$
Theorem 3.4.
Let $\rho >\mathrm{max}\{|{x}_{0}|,|{y}_{0}|,|{y}_{0}-\gamma |\};$ $f,g\in {C}^{1}\left({[-\rho ,\rho ]}^{2}\right);$ ${M}_{\rho}$ a bound of $|xf(x,y)|,$ $|yg(x,y)|$ on ${[-\rho ,\rho ]}^{2};$ and ${\overline{M}}_{\rho}$ a bound of the absolute value of the partial derivatives of the functions $xf(x,y),\ yg(x,y)$ on ${[-\rho ,\rho ]}^{2}.$ If $T$ is such that
(3.18)  $$\mathrm{max}\{|{x}_{0}|,|{y}_{0}-\gamma |\}+2T{M}_{\rho}\le \rho ,\qquad |{y}_{0}|+T{M}_{\rho}\le \rho ,\qquad 3T{\overline{M}}_{\rho}<1,$$
then the control problem has a unique solution $(x,y,\lambda )$ with $|x|,|y|\le \rho .$
Proof.
Integration leads to the integral system
(3.19)  $$\{\begin{array}{c}x(t)={x}_{0}+{\int}_{0}^{t}x(s)f(x(s),y(s))\mathit{d}s-\lambda t,\\ y(t)={y}_{0}+{\int}_{0}^{t}y(s)g(x(s),y(s))\mathit{d}s.\end{array}$$
Using the controllability condition we find the expression of $\lambda ,$ namely
$$\lambda =\frac{1}{T}\left({x}_{0}+{y}_{0}-\gamma \right)+\frac{1}{T}{\int}_{0}^{T}\left(x\left(s\right)f(x\left(s\right),y\left(s\right))+y\left(s\right)g(x\left(s\right),y\left(s\right))\right)\mathit{d}s.$$
Replacing $\lambda $ by this expression in (3.19), we obtain a Volterra-Fredholm integral system which can be seen as a fixed point equation in $C([0,T];{\mathbb{R}}^{2})$ for the operator $H=(A,B),$ where
$$A(x,y)(t)=\left(1-\frac{t}{T}\right){x}_{0}-\frac{t}{T}\left({y}_{0}-\gamma \right)+\left(1-\frac{t}{T}\right){\int}_{0}^{t}x(s)f(x(s),y(s))\mathit{d}s-\frac{t}{T}{\int}_{t}^{T}x(s)f(x(s),y(s))\mathit{d}s-\frac{t}{T}{\int}_{0}^{T}y(s)g(x(s),y(s))\mathit{d}s,$$
$$B(x,y)(t)={y}_{0}+{\int}_{0}^{t}y(s)g(x(s),y(s))\mathit{d}s.$$
The system being of Volterra-Fredholm type, the Bielecki technique of equivalent norms does not apply. Thus in $C[0,T]$ we are forced to use the max-norm $\Vert x\Vert ={\mathrm{max}}_{t\in [0,T]}|x(t)|$ and in $C([0,T];{\mathbb{R}}^{2})$ the norm $\Vert (x,y)\Vert =\Vert x\Vert +\Vert y\Vert .$
We shall apply Banach contraction theorem on the set
$${D}_{\rho}:=\{(x,y)\in C([0,T];{\mathbb{R}}^{2}):\Vert x\Vert ,\Vert y\Vert \le \rho \}.$$ 
To this aim we have to guarantee (a) the invariance condition $H\left({D}_{\rho}\right)\subset {D}_{\rho}$ and (b) the contraction property of $H$ in ${D}_{\rho}.$
(a) For any $(x,y)\in {D}_{\rho}$ we have
$$|A(x,y)(t)|\le \mathrm{max}\{|{x}_{0}|,|{y}_{0}-\gamma |\}+2T{M}_{\rho},\qquad |B(x,y)(t)|\le |{y}_{0}|+T{M}_{\rho}.$$
Hence the condition $H\left({D}_{\rho}\right)\subset {D}_{\rho}$ is satisfied provided that
$$\mathrm{max}\{|{x}_{0}|,|{y}_{0}-\gamma |\}+2T{M}_{\rho}\le \rho ,\qquad |{y}_{0}|+T{M}_{\rho}\le \rho .$$
(b) For any $(x,y),(\overline{x},\overline{y})\in {D}_{\rho},$ using estimates of the following type
$${\int}_{0}^{t}|x(s)f(x(s),y(s))-\overline{x}\left(s\right)f(\overline{x}\left(s\right),\overline{y}\left(s\right))|\,\mathit{d}s \le {\int}_{0}^{t}|x(s)f(x(s),y(s))-\overline{x}\left(s\right)f(\overline{x}\left(s\right),y\left(s\right))|\,\mathit{d}s+{\int}_{0}^{t}|\overline{x}\left(s\right)f(\overline{x}\left(s\right),y\left(s\right))-\overline{x}\left(s\right)f(\overline{x}\left(s\right),\overline{y}\left(s\right))|\,\mathit{d}s \le T{\overline{M}}_{\rho}\left(\Vert x-\overline{x}\Vert +\Vert y-\overline{y}\Vert \right),$$
we deduce that
$$\Vert A(x,y)-A(\overline{x},\overline{y})\Vert \le 2T{\overline{M}}_{\rho}\left(\Vert x-\overline{x}\Vert +\Vert y-\overline{y}\Vert \right),\qquad \Vert B(x,y)-B(\overline{x},\overline{y})\Vert \le T{\overline{M}}_{\rho}\left(\Vert x-\overline{x}\Vert +\Vert y-\overline{y}\Vert \right).$$
Hence
$$\Vert H(x,y)-H(\overline{x},\overline{y})\Vert \le 3T{\overline{M}}_{\rho}\Vert (x,y)-(\overline{x},\overline{y})\Vert .$$
Thus $H$ is a contraction on ${D}_{\rho}$ if $3T{\overline{M}}_{\rho}<1.$
Therefore the Banach contraction theorem applies and gives the result.
4. Applications
4.1. Example 1
This example illustrates Theorem 3.1. Consider the following self-limiting system
$$\left\{\begin{array}{l}{x}^{\prime}=\left(\dfrac{a}{1+{x}^{2}+{y}^{2}}-\lambda(t)\right)x,\\[2mm] {y}^{\prime}=\left(\dfrac{b}{1+{x}^{2}+{y}^{2}}-c\lambda(t)\right)y\end{array}\right.$$
under the control condition (3.2).
Here
$$f(x,y)=\frac{a}{1+{x}^{2}+{y}^{2}},g(x,y)=\frac{b}{1+{x}^{2}+{y}^{2}}$$ 
and the functions (3.3) are
$$\begin{aligned}
x\,{f}_{x}(x,y)&=-\frac{2a{x}^{2}}{{\left(1+{x}^{2}+{y}^{2}\right)}^{2}}, &
y\,{f}_{y}(x,y)&=-\frac{2a{y}^{2}}{{\left(1+{x}^{2}+{y}^{2}\right)}^{2}},\\
x\,{g}_{x}(x,y)&=-\frac{2b{x}^{2}}{{\left(1+{x}^{2}+{y}^{2}\right)}^{2}}, &
y\,{g}_{y}(x,y)&=-\frac{2b{y}^{2}}{{\left(1+{x}^{2}+{y}^{2}\right)}^{2}}.
\end{aligned}$$
Obviously their absolute values are bounded on ${\mathbb{R}}_{+}^{2}$ by $2|a|$ and $2|b|,$ respectively.
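As a quick sanity check, the bound can be verified numerically on a grid. The sketch below is illustrative only; the values of $a$ and $b$ are assumed, not taken from the paper.

```python
# Spot-check that |x*f_x(x,y)| = 2a x^2/(1+x^2+y^2)^2 <= 2|a| on the
# positive quadrant (this holds since x^2 <= (1+x^2+y^2)^2), and the
# analogous bound 2|b| for y*g_y. Values of a, b are hypothetical.
a, b = 1.5, 0.7

def x_fx(x, y):
    # x * f_x(x, y) for f(x, y) = a/(1+x^2+y^2)
    return -2 * a * x**2 / (1 + x**2 + y**2) ** 2

def y_gy(x, y):
    # y * g_y(x, y) for g(x, y) = b/(1+x^2+y^2)
    return -2 * b * y**2 / (1 + x**2 + y**2) ** 2

grid = [(i * 0.1, j * 0.1) for i in range(200) for j in range(200)]
assert all(abs(x_fx(x, y)) <= 2 * abs(a) for x, y in grid)
assert all(abs(y_gy(x, y)) <= 2 * abs(b) for x, y in grid)
```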
4.2. Example 2
Consider the following control problem related to a system modelling leukemia introduced in [13],
$$\left\{\begin{array}{l}{x}^{\prime}=\left(a\left(1-\dfrac{gx+y}{A}\right)-\lambda(t)\right)x,\\[2mm] {y}^{\prime}=\left(b\left(1-\dfrac{x+y}{B}\right)-c\lambda(t)\right)y,\end{array}\right.$$
with $a,b,c>0,$ $g\ge 1$ and $A,B>0,$ again under the control condition (3.2) expressing the desired evolution of the ratio between the density $y\left(t\right)$ of leukemic cells and the density $x\left(t\right)$ of healthy cells over a period of time. The problem is motivated by the need to develop a treatment scheme for chronic leukemia patients.
Here
$$f(x,y)=a\left(1-\frac{gx+y}{A}\right),\qquad g(x,y)=b\left(1-\frac{x+y}{B}\right),$$
for which, obviously, the boundedness condition on the functions (3.3) does not hold. However,
$$-cf(x,y)+g(x,y)=b-ac-\left(\frac{b}{B}-\frac{acg}{A}\right)x-\left(\frac{b}{B}-\frac{ac}{A}\right)y,$$
which is bounded above on ${\mathbb{R}}_{+}^{2}$ by $b-ac,$ if
(4.1)  $$\frac{acg}{A}\le \frac{b}{B}.$$
Thus, according to Theorem 3.2, if condition (4.1) holds, then the system is controllable. Solving the problem numerically yields an approximation of the control function $\lambda \left(t\right),$ which can be put in connection with the dose of medicine.
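For a concrete feeling of condition (4.1) and the upper bound $b-ac,$ one can check both numerically. The parameter values below are hypothetical, chosen only for illustration; they are not the leukemia parameters from [13].

```python
# Hypothetical parameter values, chosen only to illustrate condition (4.1).
a, b, c, g, A, B = 0.2, 0.4, 1.0, 2.0, 1000.0, 1000.0

def f(x, y):
    return a * (1 - (g * x + y) / A)

def g_(x, y):
    return b * (1 - (x + y) / B)

# Condition (4.1): a*c*g/A <= b/B.
assert a * c * g / A <= b / B

# Under (4.1) (and g >= 1), -c*f + g is bounded above by b - a*c on the
# positive quadrant; spot-check on a grid.
bound = b - a * c
worst = max(-c * f(x, y) + g_(x, y)
            for x in range(0, 2001, 50) for y in range(0, 2001, 50))
assert worst <= bound + 1e-9
```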
4.3. Example 3
We consider another example of a Kolmogorov system, namely the well-known SIR epidemiological model
$$\left\{\begin{array}{l}{S}^{\prime}=-aSI,\\ {I}^{\prime}=aSI-bI,\\ {R}^{\prime}=bI.\end{array}\right.$$
Here $S(t),I\left(t\right)$ and $R\left(t\right)$ are the numbers of susceptible, infectious and recovered/immunized individuals at time $t,$ respectively, in a closed population of size $N.$ Hence $S(t)+I(t)+R(t)=N,$ which allows us to reduce the study to the two-dimensional system
$$\left\{\begin{array}{l}{S}^{\prime}=-aSI,\\ {I}^{\prime}=aSI-bI.\end{array}\right.$$
Let ${S}_{0},{I}_{0}$ and ${R}_{0}=N-\left({S}_{0}+{I}_{0}\right)$ be the initial values of the three functions.
Introducing a constant vaccination rate $\lambda ,$ the system becomes
$$\left\{\begin{array}{l}{S}^{\prime}=-aSI-\lambda ,\\ {I}^{\prime}=aSI-bI.\end{array}\right.$$
The control problem consists in finding the vaccination rate $\lambda $ so that at time $T$ the size of the immunized population is $pN$ for a target value $p\in (0,1),$ that is,
$$S\left(T\right)+I\left(T\right)=\left(1p\right)N.$$ 
This is a particular case of the general control problem (3.17). Here $\rho =N,$ $\gamma =\left(1-p\right)N,$ $x=S,$ $y=I,$ $f(S,I)=-aI$ and $g(S,I)=aS-b.$ A simple calculation shows that ${M}_{N}={\overline{M}}_{N}=aN+b.$ Thus Theorem 3.4 guarantees that the system is uniquely controllable in time $T$ if $T$ is small enough in the sense of inequalities (3.18). However, if an upper bound $\overline{\lambda}$ for the vaccination rate $\lambda $ is imposed, then a lower bound on $T$ is also required. Indeed, from (3.3), since $I\le N,$ we have
$$\begin{aligned}
\overline{\lambda}\ge \lambda &=\frac{1}{T}\left({S}_{0}+{I}_{0}-\left(1-p\right)N\right)+\frac{1}{T}\int_{0}^{T}\left(-aSI+\left(aS-b\right)I\right)ds\\
&=\frac{1}{T}\left({S}_{0}+{I}_{0}-\left(1-p\right)N\right)-\frac{b}{T}\int_{0}^{T}I\,ds\\
&\ge \frac{1}{T}\left({S}_{0}+{I}_{0}-\left(1-p\right)N\right)-bN\\
&=\frac{1}{T}\left(pN-{R}_{0}\right)-bN,
\end{aligned}$$
whence
$$T\ge \frac{pN{R}_{0}}{bN+\overline{\lambda}}.$$ 
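To illustrate, the constant vaccination rate can be approximated by bisection on $\lambda ,$ exploiting that $S(T)+I(T)$ decreases as $\lambda $ grows. The sketch below uses a simple Euler integration; the function name `controlled_sir` and all parameter values are assumptions made for illustration, not data from the paper.

```python
# Hypothetical data (not from the paper): a mild epidemic in a closed
# population of size N, with target immunized fraction p at time T.
N, a, b, T = 1000.0, 0.0001, 0.1, 30.0
S0, I0 = 900.0, 50.0
R0 = N - (S0 + I0)
p = 0.4
target = (1 - p) * N        # the condition S(T) + I(T) = (1-p)N

def controlled_sir(lam):
    """Euler integration of S' = -a*S*I - lam, I' = a*S*I - b*I;
    returns S(T) + I(T)."""
    n = 4000
    h = T / n
    S, I = S0, I0
    for _ in range(n):
        dS = -a * S * I - lam
        dI = a * S * I - b * I
        S, I = S + h * dS, I + h * dI
    return S + I

# S(T)+I(T) is decreasing in lam, so bisect on a bracketing interval.
lo, hi = 0.0, 20.0
assert controlled_sir(lo) > target > controlled_sir(hi)
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if controlled_sir(mid) > target:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)

# With lam_bar = lam, the paper's lower bound on T must be respected.
T_min = (p * N - R0) / (b * N + lam)
assert T_min <= T
```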
5. Conclusions
Through this work, the controllability of the general Kolmogorov system modeling the interaction of two populations was analyzed in three situations: when the control acts over time on both per capita growth rates, and when it is a constant affecting only one of the populations, either through its per capita rate or through its overall growth rate. The analysis was performed in the unified framework provided by the abstract scheme of controllability of fixed point equations recently formulated by the second author.
From the perspective of those readers interested in applications, the three examples of control problems may suggest the wide applicability of our method to control various models from applied mathematics.
From a theoretical perspective, the method, which leads to operator equations with composed mappings, is suitable to be related to advanced research in fixed point theory for single-valued and multivalued operators, especially operators of decomposable type.
From a computational point of view, since it leads to integral equations of Volterra or Fredholm type, the method lends itself to being complemented by numerical results and approximation schemes.
Acknowledgment
We would like to thank the reviewers for valuable comments and suggestions towards improving our manuscript.
Conflict of interest
The authors declare that they have no conflicts of interest to this work.
References
[1] K. Balachandran, J. P. Dauer, Controllability of nonlinear systems via fixed-point theorems, Journal of Optimization Theory and Applications, 53 (1987), 345–352.
[2] V. Barbu, Mathematical methods in optimization of differential systems, Dordrecht: Springer Science+Business Media, 1994.
[3] N. Carmichael, M. D. Quinn, Fixed point methods in nonlinear control, In: F. Kappel, K. Kunisch, W. Schappacher (eds) Distributed parameter systems, Lecture Notes in Control and Information Sciences, vol 75, Berlin: Springer, 1985.
[4] J.-M. Coron, Control and nonlinearity, Mathematical Surveys and Monographs Vol. 136, Providence: Amer. Math. Soc., 2007.
[5] L. Górniewicz, S. K. Ntouyas, D. O'Regan, Controllability of semilinear differential equations and inclusions via semigroup theory in Banach spaces, Reports on Mathematical Physics, 56 (2005), 437–470.
[6] A. Granas, J. Dugundji, Fixed point theory, New York: Springer, 2003.
[7] I. Ş. Haplea, L. G. Parajdi, R. Precup, On the controllability of a system modeling cell dynamics related to leukemia, Symmetry, 13 (2021), 1867.
[8] J. Klamka, Schauder's fixed-point theorem in nonlinear controllability problems, Control and Cybernetics, 29 (2000), 153–165.
[9] J. Klamka, A. Babiarz, M. Niezabitowski, Banach fixed-point theorem in semilinear controllability problems – a survey, Bulletin of the Polish Academy of Sciences: Technical Sciences, 64 (2016), 21–35.
[10] A. N. Kolmogorov, Sulla teoria di Volterra della lotta per l'esistenza, Giornale dell'Istituto Italiano degli Attuari, 7 (1936), 74–80.
[11] H. Leiva, Rothe's fixed point theorem and controllability of semilinear nonautonomous systems, Systems and Control Letters, 67 (2014), 14–18.
[12] M. E. M. Meza, A. Bhaya, E. Kaszkurewicz, Controller design techniques for the Lotka–Volterra nonlinear system, Sba: Controle & Automação, 16 (2005), 124–135.
[13] B. Neiman, A mathematical model of chronic myelogenous leukemia, Oxford: Oxford University, 2000.
[14] R. Precup, Methods in nonlinear integral equations, Dordrecht: Springer Science+Business Media, 2002.
[15] R. Precup, Fixed point theorems for decomposable multivalued maps and applications, Zeitschrift für Analysis und ihre Anwendungen, 22 (2003), 843–861.
[16] R. Precup, On some applications of the controllability principle for fixed point equations, Results in Applied Mathematics, 13 (2022), 100236.
[17] K. Sigmund, Kolmogorov and population dynamics, In: É. Charpentier, A. Lesne, N. K. Nikolski (eds) Kolmogorov's heritage in mathematics, Berlin: Springer, 2007.