A Mutual Control Problem for Semilinear Systems via Fixed Point Approach

Abstract

In this paper, we introduce and discuss the concept of a mutual control problem. Our analysis relies on a vector fixed-point approach based on the fixed-point theorems of Perov, Schauder, Leray-Schauder, and Avramescu. Additionally, for a related semi-observability problem, we employ a novel technique utilizing Bielecki equivalent norms.

Authors

Radu Precup
Faculty of Mathematics and Computer Science and Institute of Advanced Studies in Science and Technology, Babes-Bolyai University, Cluj-Napoca, Romania
Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy, Cluj-Napoca, Romania

Andrei Stan
Department of Mathematics, Babes-Bolyai University, Cluj-Napoca, Romania
Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy, Cluj-Napoca, Romania

Keywords

control; fixed point; nonlinear operator; differential system

Paper coordinates

R. Precup, A. Stan, A mutual control problem for semilinear system via fixed point approach, J. Nonlinear Convex Anal.,  26 (2025), no. 5, 1081-1094.



Paper (preprint) in HTML form

A mutual control problem for semilinear systems via fixed point approach

Radu Precup (Faculty of Mathematics and Computer Science and Institute of Advanced Studies in Science and Technology, Babeş-Bolyai University, 400084 Cluj-Napoca, Romania & Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy, P.O. Box 68-1, 400110 Cluj-Napoca, Romania; r.precup@ictp.acad.ro) and Andrei Stan (Department of Mathematics, Babeş-Bolyai University, 400084 Cluj-Napoca, Romania & Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy, P.O. Box 68-1, 400110 Cluj-Napoca, Romania; andrei.stan@ubbcluj.ro)
Abstract.

In this paper, we introduce and discuss the concept of a mutual control problem. Our analysis relies on a vector fixed-point approach based on the fixed-point theorems of Perov, Schauder, and Avramescu. In our analysis we employ a novel technique utilizing Bielecki equivalent norms.

Key words and phrases: control; fixed point; nonlinear operator; differential system

Mathematics Subject Classification (2010): 34B15, 34K35, 47H10

1. Introduction and preliminaries

The study of systems of abstract or concrete equations has been the subject of research for a long time, especially from the perspective of the existence and uniqueness of solutions. In the present paper, our objective goes beyond merely establishing the existence of solutions; we strive to identify solutions whose components maintain a level of control over each other.

Let $D_{1},D_{2},C$ be sets with $C\subset D_{1}\times D_{2}$ and let $E:D_{1}\times D_{2}\rightarrow Z$ be a mapping, where $Z$ is a linear space. Consider the equation $E(x,\lambda)=0_{Z}$ and the problem of finding $\lambda\in D_{2}$ such that the equation has a solution $x\in D_{1}$ with $(x,\lambda)\in C$. We say that $x$ is the state variable, $\lambda$ is the control variable, and $C$ is the controllability domain. One way to solve the problem (see, e.g., [12]) is to use the controllability condition to express $\lambda$ as a function of $x$, $\lambda=N(x)$, and then find a solution to the equation $E(x,N(x))=0_{Z}$. Alternatively, we can express $x$ as $x=N(\lambda)$ and then solve the equation $E(N(\lambda),\lambda)=0_{Z}$. In the first case, we solve the problem

\left\{\begin{array}{l}\lambda=N\left(x\right)\\[5.0pt] E\left(x,N\left(x\right)\right)=0_{Z},\end{array}\right.

while in the second case, the problem

\left\{\begin{array}{l}x=N\left(\lambda\right)\\[5.0pt] E\left(N\left(\lambda\right),\lambda\right)=0_{Z}.\end{array}\right.

For a system of two equations, we view each variable as a control over the other, and we aim to find solutions that achieve equilibrium under certain controllability conditions. We call this a mutual control problem. Specifically, we consider four sets $D_{1},D_{2},C_{1},C_{2}$ with $C_{1}\subset D_{1}\times D_{2}$ and $C_{2}\subset D_{2}\times D_{1}$, two mappings $E_{1},E_{2}:D_{1}\times D_{2}\rightarrow Z$, and the problem

\begin{cases}E_{1}(x,y)=0_{Z}\\[5.0pt] E_{2}(x,y)=0_{Z}\\[5.0pt] (x,y)\in C_{1},\ (y,x)\in C_{2}.\end{cases}

The interpretation of this problem is as follows: the first equation describes the state $x$ controlled by $y$, and the second equation describes the state $y$ controlled by $x$. The restriction of $(x,y)$ to $C_{1}$ and of $(y,x)$ to $C_{2}$ represents the controllability conditions. One way to solve such a problem is to incorporate the controllability conditions into the equations and give the problem a fixed point formulation

\left(x,y\right)\in\left(N_{1}(x,y),\ N_{2}(x,y)\right),

where $N_{1},N_{2}$ are set-valued mappings $N_{1}:D_{1}\times D_{2}\rightarrow D_{1}$, $N_{2}:D_{1}\times D_{2}\rightarrow D_{2}$. A solution $(x,y)$ of this fixed point equation is said to be a solution of the mutual control problem, while the problem is said to be mutually controllable if such a solution exists.

In some cases, we can do even more, namely express $x$ and $y$ as state variables in terms of the controls $y$ and $x$, respectively. In such situations, one has $x\in N_{1}(y)$ and $y\in N_{2}(x)$, where

N_{1}:D_{2}\rightarrow D_{1},\ \ N_{1}(y):=\left\{x:\ E_{1}(x,y)=0_{Z}\text{ and }(x,y)\in C_{1}\right\},
N_{2}:D_{1}\rightarrow D_{2},\ \ N_{2}(x):=\left\{y:\ E_{2}(x,y)=0_{Z}\text{ and }(y,x)\in C_{2}\right\}.

Note that $N_{1}$ indicates how the state $x$ is controlled by $y$, and $N_{2}$ shows how the state $y$ is controlled by $x$. The equilibrium is achieved if there exists $(x,y)\in D_{1}\times D_{2}$ such that

(x,y)\in\left(N_{1}(y),N_{2}(x)\right).

It is interesting to note the similarity between the mutual control problem and the Nash equilibrium problem. In the case of the latter, we have two functionals $J_{1}(x,y)$ and $J_{2}(x,y)$. Minimizing $J_{1}(\cdot,y)$ on $D_{1}$ for each $y\in D_{2}$ yields the set of minimum points denoted $s_{1}(y)$, while minimizing $J_{2}(x,\cdot)$ on $D_{2}$ for each $x\in D_{1}$ yields the set of minimum points denoted $s_{2}(x)$. A point $(x,y)$ is a Nash equilibrium with respect to the two functionals if

\left(x,y\right)\in\left(s_{1}\left(y\right),\ s_{2}\left(x\right)\right).

In situations where the two functionals are differentiable, a Nash equilibrium $(x,y)$ solves the system

\left\{\begin{array}{l}J_{11}\left(x,y\right)=0\\ J_{22}\left(x,y\right)=0,\end{array}\right.

where $J_{11},J_{22}$ are the derivatives of $J_{1},J_{2}$ with respect to the first and second variable, respectively. These equations replace the ones associated with $E_{1},E_{2}$, while the controllability conditions require that $x$ and $y$ minimize $J_{1}$ and $J_{2}$ in the first and second variable, respectively, that is,

C_{1}=\left\{\left(x,y\right)\in D_{1}\times D_{2}:\ x\text{ minimizes }J_{1}\left(\cdot,y\right)\right\},
C_{2}=\left\{\left(y,x\right)\in D_{2}\times D_{1}:\ y\text{ minimizes }J_{2}\left(x,\cdot\right)\right\}.

Therefore, in the differentiable case, the Nash equilibrium problem appears as a particular case of the mutual control problem as specified above. For further details on such results, we refer the reader to [1, 8, 10, 11, 14, 15, 16, 17, 18].

Compared to classical control theory, where the problems involve explicit control variables (see, e.g., [3, 20]), mutual control problems feature controls that are implicit in the very form of the coupling terms. In this sense, the mutual control problem is more general.

In this paper, we discuss the solvability of a mutual control problem for a system of semilinear first-order differential equations, namely

(1.1) \left\{\begin{array}{l}x^{\prime}+Ax=f\left(x,y\right)\\[5.0pt] y^{\prime}+By=g\left(x,y\right),\text{ on }\left[0,T\right].\end{array}\right.

Here, $T>0$; $x,y\in\mathbb{R}^{n}$; $A,B\in M_{n\times n}(\mathbb{R})$; and $f,g:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$ are continuous mappings. Each of the two states, $x$ and $y$, controls the other with the aim of achieving the equilibrium specified by the controllability condition

(1.2) x(T)=ky(T),

for some given $k>0$. Thus,

C_{1}=\left\{(x,y)\,:\,x(T)=ky(T)\right\},
C_{2}=\left\{(y,x)\,:\,y(T)=\tfrac{1}{k}x(T)\right\}.

Related to our system (1.1), we assume that only one initial state is known, say $y(0)$. Then, based on this and the controllability condition (1.2), we determine both the solution and the other initial state $x(0)$. Note that, in this case, we are dealing with a semi-observability problem since, given a control and some partial initial information, we must determine both the output and the necessary initial condition $x(0)$ that leads to this output satisfying the control. We refer the reader to [20] for more details on observability problems.

Such a control condition is of interest in population dynamics when it expresses the requirement that at a certain moment $T$ the ratio between two populations, for example prey and predators, should equal a desired value $k$. Similarly, for the control of epidemics, it expresses the requirement that at some time one reaches a certain ratio between the infected population and the population susceptible to infection. Analogous interpretations can be given for some chemical or medical reaction models.

Our analysis relies on the basic fixed point theorems of Perov, Schauder, and Avramescu. We present both the advantages and limitations of each theorem. Using a vector approach based on matrices instead of constants allows us to obtain results independent of the norms of the spaces we are working in.

1.1. Bielecki-type norms

For each number $\theta\geq 0$, on the space $C([0,T];\mathbb{R}^{n})$ we define the Bielecki norm

\left\|u\right\|_{\theta}=\max_{t\in\left[0,T\right]}e^{-\theta t}\left|u\left(t\right)\right|.

Note that for $\theta=0$ one has $\|\cdot\|_{0}=\|\cdot\|$, where $\|\cdot\|$ is the uniform norm. We mention that, with a suitable choice of $\theta$, Bielecki-type norms allow us to relax the restrictions on the constants required by the Lipschitz or growth conditions when using fixed point theorems.
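For numerical experiments it is convenient to evaluate this norm on a grid. The short Python sketch below is our own illustration (not part of the paper); the grid size and the test function are arbitrary choices.

```python
import numpy as np

def bielecki_norm(u_values, t_grid, theta):
    """Approximate ||u||_theta = max_{t in [0,T]} e^{-theta t} |u(t)| on a grid."""
    return np.max(np.exp(-theta * t_grid) * np.abs(u_values))

T = 1.0
t = np.linspace(0.0, T, 1001)
u = np.exp(2.0 * t)                        # sample function u(t) = e^{2t}

print(bielecki_norm(u, t, theta=0.0))      # theta = 0: the uniform norm, here e^2
print(bielecki_norm(u, t, theta=3.0))      # theta = 3 damps the growth: max of e^{-t} is 1
```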

1.2. Matrices convergent to zero

A square matrix $M\in M_{n\times n}(\mathbb{R}_{+})$ is said to be convergent to zero if its powers $M^{k}$ tend to the zero matrix as $k\rightarrow\infty$. The next lemma provides equivalent conditions for a square matrix to be convergent to zero (see, e.g., [2, 9, 13]).

Lemma 1.1.

Let $A\in M_{n\times n}(\mathbb{R}_{+})$ be a square matrix. The following statements are equivalent:

(a) The matrix $A$ is convergent to zero.

(b) The spectral radius of $A$ is less than $1$, i.e., $\rho(A)<1$.

(c) The matrix $I-A$, where $I$ is the unit matrix of the same size, is invertible and its inverse has nonnegative entries, i.e., $(I-A)^{-1}\in M_{n\times n}(\mathbb{R}_{+})$.

(d) In the case $n=2$: a matrix $A=[a_{ij}]_{1\leq i,j\leq 2}\in M_{2\times 2}(\mathbb{R}_{+})$ is convergent to zero if and only if

a_{11},\ a_{22}<1\ \ \text{and}\ \ a_{11}+a_{22}<1+a_{11}a_{22}-a_{12}a_{21}.
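As an illustration of Lemma 1.1, the following Python sketch (ours, with an arbitrary test matrix) checks condition (b) via the spectral radius and, for the $2\times 2$ case, the explicit criterion (d); the two tests should always agree.

```python
import numpy as np

def convergent_to_zero(A):
    """Condition (b) of Lemma 1.1: spectral radius less than 1."""
    return np.max(np.abs(np.linalg.eigvals(A))) < 1.0

def criterion_2x2(A):
    """Condition (d) of Lemma 1.1 for a nonnegative 2x2 matrix."""
    (a11, a12), (a21, a22) = A
    return a11 < 1 and a22 < 1 and a11 + a22 < 1 + a11 * a22 - a12 * a21

A = np.array([[0.3, 0.2],
              [0.4, 0.5]])                       # arbitrary nonnegative test matrix
print(convergent_to_zero(A), criterion_2x2(A))   # the two tests agree (both True here)
print(np.linalg.matrix_power(A, 50))             # high powers approach the zero matrix
```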

In what follows, the vectors in $\mathbb{R}^{n}$ are treated as columns. By $|x|$ we mean the Euclidean norm of the vector $x$; $\langle x,y\rangle$ represents the inner product in $\mathbb{R}^{n}$; and $\|x\|$ stands for the uniform norm in $C([0,T];\mathbb{R}^{n})$. For a square matrix $A$, we denote $S_{A}(t)=e^{-tA}$ ($t\in\mathbb{R}$). Additionally, let $C_{A}$ be an upper bound of the norms of the linear operators $e^{-tA}$ on $[-T,T]$, i.e.,

\max_{t\in[-T,T]}\left|e^{-tA}\right|\leq C_{A}.

We conclude this introduction by recalling two lesser-known fixed point theorems that are applied to establish our results.

Theorem 1.2 (Perov).

Let $(X_{i},d_{i})$, $i=1,2$, be complete metric spaces and let $N_{i}:X_{1}\times X_{2}\rightarrow X_{i}$ be two mappings for which there exists a square matrix $M$ of size two, with nonnegative entries and spectral radius $\rho(M)<1$, such that the following vector inequality

\left(\begin{array}{c}d_{1}\left(N_{1}\left(x,y\right),N_{1}\left(u,v\right)\right)\\[5.0pt] d_{2}\left(N_{2}\left(x,y\right),N_{2}\left(u,v\right)\right)\end{array}\right)\leq M\left(\begin{array}{c}d_{1}\left(x,u\right)\\[5.0pt] d_{2}\left(y,v\right)\end{array}\right)

holds for all $(x,y),(u,v)\in X_{1}\times X_{2}$. Then, there exists a unique point $(x,y)\in X_{1}\times X_{2}$ with

(x,y)=\left(N_{1}\left(x,y\right),\ N_{2}\left(x,y\right)\right).
Theorem 1.3 (Avramescu).

Let $D_{1}$ be a closed convex subset of a normed space $Y$, let $(D_{2},d)$ be a complete metric space, and let $N_{i}:D_{1}\times D_{2}\rightarrow D_{i}$, $i=1,2$, be continuous mappings. Assume that the following conditions are satisfied:

(i) $N_{1}(D_{1}\times D_{2})$ is a relatively compact subset of $Y$;

(ii) there is a constant $L\in[0,1)$ such that

d(N_{2}(x,y),N_{2}(x,\bar{y}))\leq L\,d(y,\overline{y}),

for all $x\in D_{1}$ and $y,\overline{y}\in D_{2}$.

Then, there exists $(x,y)\in D_{1}\times D_{2}$ such that

N_{1}(x,y)=x,\quad N_{2}(x,y)=y.

Some reference works in fixed point theory are the books [4] and [5].

2. Controllability of the mutual control problem

In the first part of this section, we present some properties of a particular type of matrix that will be used throughout this paper.

2.1. On a type of matrices convergent to zero

In the subsequent analysis based on fixed point arguments, it is advantageous to use the matrix

M(\theta)=\begin{bmatrix}a_{11}&a_{12}\frac{e^{\theta T}-1}{\theta}\\[3.0pt] a_{21}&a_{22}\frac{1-e^{-\theta T}}{\theta}\end{bmatrix},

where $\theta\geq 0$, and we need to find $\theta$ such that $M(\theta)$ is convergent to zero. Here, $a_{ij}$ ($i,j=1,2$) are nonnegative numbers with

a_{11}<1\ \ \ \text{and}\ \ \ a_{22}<\frac{1}{T}.

Notice that the last inequality guarantees $a_{22}\frac{1-e^{-\theta T}}{\theta}<1$, since $\frac{1-e^{-\theta T}}{\theta}\leq T$ for every $\theta\geq 0$.

From Lemma 1.1, the matrix $M(\theta)$ is convergent to zero if and only if $h(\theta)<0$, where

h(\theta)=\operatorname{tr}(M(\theta))-1-\det(M(\theta))
=a_{11}+a_{22}\frac{1-e^{-\theta T}}{\theta}-1-a_{11}a_{22}\frac{1-e^{-\theta T}}{\theta}+a_{12}a_{21}\frac{e^{\theta T}-1}{\theta}\ \ \left(\theta\geq 0\right).

Denoting $\tau=a_{22}(1-a_{11})-a_{12}a_{21}$, one has

h^{\prime}(\theta)=\frac{1}{\theta^{2}}\left(a_{22}e^{-\theta T}(1-a_{11})(1+\theta T)+a_{12}a_{21}(\theta T-1)e^{\theta T}-\tau\right),
h(0)=-\left(1-a_{11}\right)\left(1-a_{22}T\right)+a_{12}a_{21}T,
h^{\prime}\left(0\right)=-a_{22}(1-a_{11})\left(\frac{T^{2}}{2}+1\right)<0,\text{ and }
\lim_{\theta\rightarrow\infty}h\left(\theta\right)=\left\{\begin{array}{l}a_{11}-1\ \ \text{if }a_{12}a_{21}=0\\[5.0pt] +\infty\ \ \text{if }a_{12}a_{21}>0.\end{array}\right.
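Since the sign of $h(\theta)$ decides whether $M(\theta)$ is convergent to zero, the simplest practical test is a direct numerical scan in $\theta$. The following Python sketch is our own illustration; the coefficients $a_{ij}$ and $T$ are arbitrary sample values.

```python
import numpy as np

def h(theta, a11, a12, a21, a22, T):
    """h(theta) = tr M(theta) - 1 - det M(theta); together with a11 < 1 and a22*T < 1,
    M(theta) is convergent to zero exactly when h(theta) < 0."""
    if theta == 0.0:
        return -(1 - a11) * (1 - a22 * T) + a12 * a21 * T
    u = (1 - np.exp(-theta * T)) / theta        # factor multiplying a22 in M(theta)
    v = (np.exp(theta * T) - 1) / theta         # factor multiplying a12 in M(theta)
    return a11 + a22 * u - 1 - a11 * a22 * u + a12 * a21 * v

# arbitrary sample coefficients with a11 < 1 and a22 < 1/T
a11, a12, a21, a22, T = 0.4, 0.3, 0.3, 0.5, 1.0
thetas = np.linspace(0.0, 5.0, 2001)
values = np.array([h(th, a11, a12, a21, a22, T) for th in thetas])
good = thetas[values < 0]
if good.size:
    print("h < 0 (so M(theta) is convergent to zero) for theta in about [%.3f, %.3f]"
          % (good[0], good[-1]))
else:
    print("no theta with h(theta) < 0 found in the scanned range")
```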

In the following lemma, we discuss conditions to guarantee the existence of a θ\theta such that M(θ)M(\theta) is convergent to zero.

Lemma 2.1.

Assume $0\leq a_{11}<1$ and $0<a_{22}<\frac{1}{T}$.

(i) If $h(0)<0$, then $M(0)$ converges to zero.

(ii) If $h(0)\geq 0$, then there exists $\theta_{1}>0$ with $h^{\prime}(\theta_{1})=0$ and

(a) if $h(\theta_{1})<0$, then the matrix $M(\theta)$ converges to zero for every $\theta$ between the zeroes of $h$ and does not converge to zero otherwise;

(b) if $h(\theta_{1})\geq 0$, then there is no $\theta$ such that $M(\theta)$ converges to zero.

Proof.

(i) is obvious. The next assertions are based on the convexity of the function $h$. To prove this, we show that its second derivative is nonnegative everywhere on $(0,\infty)$. Indeed, one has

h^{\prime\prime}(\theta)=\frac{1}{\theta^{3}}\left(2\left(a_{22}-a_{22}a_{11}-a_{12}a_{21}\right)+a_{12}a_{21}\varphi(\theta)-a_{22}\left(1-a_{11}\right)\varphi(-\theta)\right),

where

\varphi(\theta)=e^{\theta T}\left(\left(\theta T-1\right)^{2}+1\right).

Since

\varphi(\theta)\geq 2\ \text{ and }\ \varphi(-\theta)\leq 2,\ \text{ for all }\theta\geq 0,

we find

\theta^{3}h^{\prime\prime}(\theta)\geq 2\left(a_{22}-a_{22}a_{11}-a_{12}a_{21}\right)+2a_{12}a_{21}-2a_{22}(1-a_{11})=0,

which gives our conclusion.

(ii) Assume that $h(0)\geq 0$. Since $h^{\prime}(0)<0$ and $\lim_{\theta\rightarrow\infty}h(\theta)=+\infty$, the function $h$ has a minimum at some $\theta_{1}\in(0,+\infty)$, whence $h^{\prime}(\theta_{1})=0$. Clearly, if $h(\theta_{1})<0$, then $h$ has two positive zeros, is negative between them, and is nonnegative otherwise. If $h(\theta_{1})\geq 0$, then $h(\theta)\geq 0$ for all $\theta\geq 0$, and thus there is no $\theta$ such that $M(\theta)$ converges to zero. ∎

Remark 1.

In the case $a_{22}=0$, $M(\theta)$ is convergent to zero if and only if

a_{11}<1\ \ \ \text{and}\ \ \ a_{12}a_{21}\frac{e^{\theta T}-1}{\theta}<1-a_{11}.

Clearly, solving the equation $h^{\prime}(\theta)=0$ analytically is challenging. Thus, instead of $M(\theta)$, we may consider an approximate variant, larger than $M(\theta)$, namely

\widetilde{M}(\theta)=\begin{bmatrix}a_{11}&a_{12}\frac{e^{\theta T}-1}{\theta}\\[5.0pt] a_{21}&a_{22}\frac{1}{\theta}\end{bmatrix}.

Since $M(\theta)\leq\widetilde{M}(\theta)$ componentwise, if $\widetilde{M}(\theta)$ is convergent to zero, then so is $M(\theta)$. Following similar reasoning as above, the matrix $\widetilde{M}(\theta)$ converges to zero if and only if $\widetilde{h}(\theta)<0$, where

\widetilde{h}(\theta)=a_{11}+\frac{a_{22}}{\theta}-1-\frac{a_{11}a_{22}}{\theta}+a_{12}a_{21}\frac{e^{\theta T}-1}{\theta}\ \ \ \left(\theta>0\right).

Note that $\widetilde{h}$ cannot be extended to zero by continuity and in general is not convex. However, under suitable conditions on the $a_{ij}$, a result similar to the previous lemma holds true. In what follows, $W:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$ denotes the Lambert $W$ function restricted to $\mathbb{R}_{+}$, i.e., the inverse of the function $ze^{z}$ ($z\in\mathbb{R}_{+}$). For details, we refer to [7].

Lemma 2.2.

Assume that $a_{11}<1$.

(i) If

(2.1) \tau:=a_{22}(1-a_{11})-a_{12}a_{21}>0,

then the function $\widetilde{h}$ is strictly convex on $(0,\infty)$.

(ii) If

a_{12}a_{21}Te^{\theta_{1}T}<1-a_{11},

where $\theta_{1}=\frac{1}{T}\left(W\left(\frac{\tau}{e\,a_{12}a_{21}}\right)+1\right)$, then $\widetilde{h}(\theta_{1})<0$ and there is a neighborhood $V=(\sigma_{1},\sigma_{2})$ of $\theta_{1}$, with $0<\sigma_{1}<\sigma_{2}<+\infty$, such that the matrix $\widetilde{M}(\theta)$ is convergent to zero for every $\theta\in V$ and is not so for $\theta\notin V$.

Proof.

(i) Simple computations yield

\widetilde{h}^{\prime}(\theta)=\frac{1}{\theta^{2}}\left(a_{12}a_{21}e^{\theta T}\left(\theta T-1\right)-\tau\right).

Differentiating $\widetilde{h}^{\prime}(\theta)$ again with respect to $\theta\in(0,\infty)$, and using (2.1), we deduce

\widetilde{h}^{\prime\prime}(\theta)=\frac{1}{\theta^{3}}\left(a_{12}a_{21}e^{\theta T}\theta^{2}T^{2}-2\left(-\tau+a_{12}a_{21}e^{\theta T}\theta T-a_{12}a_{21}e^{\theta T}\right)\right)
=\frac{a_{12}a_{21}}{\theta^{3}}e^{\theta T}\left(\theta^{2}T^{2}-2\theta T+2\right)+\frac{2}{\theta^{3}}\left(a_{22}-a_{11}a_{22}-a_{12}a_{21}\right)
=\frac{a_{12}a_{21}}{\theta^{3}}e^{\theta T}\left(1+(\theta T-1)^{2}\right)+\frac{2\tau}{\theta^{3}}>0.

(ii) Note that from $a_{22}(1-a_{11})>0$, we find $\lim_{\theta\rightarrow 0}\widetilde{h}(\theta)=+\infty$, while from $a_{12}a_{21}>0$, we have $\lim_{\theta\rightarrow\infty}\widetilde{h}(\theta)=+\infty$. Therefore, $\widetilde{h}$ has a minimum at some $\theta_{1}\in(0,\infty)$. Since $\widetilde{h}$ is convex, we have $\widetilde{h}^{\prime}(\theta_{1})=0$, which leads to

a_{12}a_{21}e^{\theta_{1}T}(\theta_{1}T-1)=\tau.

Letting $z:=\theta_{1}T-1$, we obtain

ze^{z}=\frac{\tau}{e\,a_{12}a_{21}},

thus $z=W\left(\frac{\tau}{e\,a_{12}a_{21}}\right)$. Consequently,

\theta_{1}=\frac{1}{T}\left(W\left(\frac{\tau}{e\,a_{12}a_{21}}\right)+1\right).

Next, evaluating $\widetilde{h}$ at $\theta_{1}$, we find

\widetilde{h}(\theta_{1})=a_{11}-1+a_{12}a_{21}Te^{\theta_{1}T}.

Therefore, under our assumption, $\widetilde{h}(\theta_{1})<0$. The conclusion follows from the convexity of $\widetilde{h}$ and its limits as $\theta$ approaches $0$ and infinity. ∎
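The minimizer $\theta_{1}$ of $\widetilde{h}$ can be evaluated directly with a Lambert $W$ routine. The Python sketch below (ours, with arbitrary sample coefficients) computes $\theta_{1}$ and tests the inequality of condition (ii) of Lemma 2.2; it assumes SciPy's scipy.special.lambertw.

```python
import numpy as np
from scipy.special import lambertw

def lemma_2_2(a11, a12, a21, a22, T):
    """Compute theta_1 from Lemma 2.2 and test the inequality of condition (ii)."""
    tau = a22 * (1 - a11) - a12 * a21
    if not (a11 < 1 and a12 * a21 > 0 and tau > 0):
        raise ValueError("need a11 < 1, a12*a21 > 0 and tau > 0")
    theta1 = (lambertw(tau / (np.e * a12 * a21)).real + 1.0) / T
    condition_ii = a12 * a21 * T * np.exp(theta1 * T) < 1 - a11
    return theta1, condition_ii

# arbitrary sample coefficients; here condition (ii) holds, so M~(theta) is
# convergent to zero for theta in a neighborhood of theta_1
print(lemma_2_2(a11=0.2, a12=0.2, a21=0.2, a22=0.5, T=1.0))
```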

Remark 2.

The advantage of using $\widetilde{M}(\theta)$ instead of $M(\theta)$ is that we have an analytical expression for the point where the minimum of $\widetilde{h}$ is attained. However, in applications, given the available computing power, it may be more advantageous to find an approximate solution of the equation $h^{\prime}(\theta)=0$ and evaluate $h$ at that point.

Since $\widetilde{M}(\theta)$ is only an approximation of $M(\theta)$, there are cases where $\widetilde{M}(\theta)$ does not converge to zero for any $\theta>0$, while there exists a $\theta_{0}>0$ such that $M(\theta_{0})$ does. The example below illustrates this situation.

Example 1.

Let

a_{11}=0.3,\ a_{12}=0.62,\ a_{21}=0.45,\ a_{22}=0.63,\ \text{ and }T=0.98.

Simple computations show that $\tau=0.162>0$ and $\theta_{1}\approx 1.1931$, but condition (ii) is not satisfied, since

0.88\approx a_{12}a_{21}Te^{\theta_{1}T}>1-a_{11}=0.7.

On the other hand, solving $h^{\prime}(\theta)=0$ numerically gives an approximate solution $\theta_{0}\approx 0.3505$, and hence $h(\theta_{0})\approx-0.00798<0$, i.e., $M(\theta_{0})$ is convergent to zero. Additionally, note that $h(0)=0.0056>0$, i.e., $M(0)$ is not convergent to zero.
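The computations of this example are easy to reproduce numerically, for instance with the following Python sketch (ours, not part of the paper), which locates the root $\theta_{0}$ of $h^{\prime}(\theta)=0$ by a bracketing method and evaluates $h$ there; small numerical differences from the rounded values reported above may occur.

```python
import numpy as np
from scipy.optimize import brentq

a11, a12, a21, a22, T = 0.3, 0.62, 0.45, 0.63, 0.98
tau = a22 * (1 - a11) - a12 * a21                 # = 0.162

def h(theta):
    u = (1 - np.exp(-theta * T)) / theta
    v = (np.exp(theta * T) - 1) / theta
    return a11 + a22 * u - 1 - a11 * a22 * u + a12 * a21 * v

def hprime(theta):
    return (a22 * np.exp(-theta * T) * (1 - a11) * (1 + theta * T)
            + a12 * a21 * (theta * T - 1) * np.exp(theta * T) - tau) / theta**2

theta0 = brentq(hprime, 0.05, 3.0)                # root of h'(theta) = 0
print("tau =", tau)
print("theta_0 ~", theta0, "  h(theta_0) ~", h(theta0))      # ~0.35 and a small negative value
print("h(0) =", -(1 - a11) * (1 - a22 * T) + a12 * a21 * T)  # positive: M(0) not convergent to zero
```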

2.2. Fixed point formulation of the mutual control problem

Throughout this section, let $\beta\in\mathbb{R}^{n}$ be an arbitrarily given value, and define

C_{\beta}([0,T];\mathbb{R}^{n}):=\left\{y\in C([0,T];\mathbb{R}^{n})\;:\;y(0)=\beta\right\}.

Additionally, we denote

X_{\beta}=C([0,T];\mathbb{R}^{n})\times C_{\beta}([0,T];\mathbb{R}^{n}),

which is a complete metric space with respect to the metric inherited from $C([0,T];\mathbb{R}^{n})\times C([0,T];\mathbb{R}^{n})$. Under the initial condition $y(0)=\beta$, the system (1.1) is equivalent to

(2.2) {x(t)=SA(t)x(0)+0tSA(ts)f(x(s),y(s))𝑑sy(t)=SB(t)β+0tSB(ts)g(x(s),y(s))𝑑s,\begin{cases}x\left(t\right)=S_{A}\left(t\right)x(0)+\int_{0}^{t}S_{A}\left(t-s\right)f\left(x(s),y(s)\right)ds\\[5.0pt] y\left(t\right)=S_{B}\left(t\right)\beta+\int_{0}^{t}S_{B}\left(t-s\right)g\left(x(s),y(s)\right)ds,\end{cases}

while the controllability condition (1.2) becomes

(2.3) x(0)\displaystyle x(0) =kSA(T)SB(T)β+k0TSA(T)SB(Ts)g(x(s),y(s))𝑑s\displaystyle=kS_{A}(-T)S_{B}\left(T\right)\beta+k\int_{0}^{T}S_{A}(-T)S_{B}\left(T-s\right)g\left(x(s),y(s)\right)ds
0TSA(s)f(x(s),y(s))𝑑s.\displaystyle\quad-\int_{0}^{T}S_{A}\left(-s\right)f\left(x\left(s\right),y\left(s\right)\right)ds.

Thus, substituting relation (2.3) in (2.2), for every t[0,T]t\in[0,T], we have

(2.4) {x(t)=N1(x,y)(t)y(t)=N2(x,y)(t),\begin{cases}x\left(t\right)=N_{1}(x,y)(t)\\ y\left(t\right)=N_{2}(x,y)(t),\end{cases}

where (N1,N2):XβXβ\left(N_{1},N_{2}\right)\colon X_{\beta}\rightarrow X_{\beta},

N1(x,y)(t)\displaystyle N_{1}(x,y)(t) =kSA(tT)SB(T)β+k0TSA(tT)SB(Ts)g(x(s),y(s))𝑑s\displaystyle=kS_{A}(t-T)S_{B}\left(T\right)\beta+k\int_{0}^{T}S_{A}(t-T)S_{B}\left(T-s\right)g\left(x(s),y(s)\right)ds
tTSA(ts)f(x(s),y(s))𝑑s,\displaystyle\quad-\int_{t}^{T}S_{A}\left(t-s\right)f\left(x(s),y(s)\right)ds,

and

N2(x,y)(t)=SB(t)β+0tSB(ts)g(x(s),y(s))𝑑s.N_{2}(x,y)(t)=S_{B}\left(t\right)\beta+\int_{0}^{t}S_{B}\left(t-s\right)g\left(x(s),y(s)\right)ds.

Clearly, the operator $(N_{1},N_{2})$ is well defined from $X_{\beta}$ to $X_{\beta}$. Consequently, the system (2.4) can be viewed as a fixed-point equation in $X_{\beta}$ for the operator $(N_{1},N_{2})$.
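To make the fixed-point formulation (2.4) concrete, the following Python sketch is our own illustration for the scalar case $n=1$ with hypothetical data $A$, $B$, $k$, $\beta$, $f$ and $g$; it approximates a fixed point of $(N_{1},N_{2})$ by successive approximations, discretizing the integrals with the trapezoidal rule. The iteration is only guaranteed to converge when $(N_{1},N_{2})$ is contractive, for instance under Lipschitz conditions of the type used in Theorem 2.4 below.

```python
import numpy as np

# Hypothetical scalar data (n = 1); none of these values come from the paper.
A, B, k, beta, T = 1.0, 1.0, 1.0, 1.0, 1.0
f = lambda x, y: 0.1 * np.sin(y)      # small Lipschitz constants keep (N1, N2) contractive
g = lambda x, y: 0.1 * np.cos(x)

m = 400
t = np.linspace(0.0, T, m + 1)
SA = lambda s: np.exp(-A * s)         # S_A(s) = e^{-sA} in the scalar case
SB = lambda s: np.exp(-B * s)

def trapz(vals, xs):
    """Trapezoidal rule; returns 0 for a single node."""
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(xs)) / 2.0)

def N1(x, y):
    fv, gv = f(x, y), g(x, y)
    out = np.empty_like(t)
    for i, ti in enumerate(t):
        term_g = k * trapz(SA(ti - T) * SB(T - t) * gv, t)      # integral over [0, T]
        term_f = trapz(SA(ti - t[i:]) * fv[i:], t[i:])          # integral over [t_i, T]
        out[i] = k * SA(ti - T) * SB(T) * beta + term_g - term_f
    return out

def N2(x, y):
    gv = g(x, y)
    out = np.empty_like(t)
    for i, ti in enumerate(t):
        out[i] = SB(ti) * beta + trapz(SB(ti - t[:i + 1]) * gv[:i + 1], t[:i + 1])
    return out

# successive approximations for the fixed point of (N1, N2)
x, y = np.zeros_like(t), np.full_like(t, beta)
for _ in range(100):
    x_new, y_new = N1(x, y), N2(x, y)
    done = max(np.max(np.abs(x_new - x)), np.max(np.abs(y_new - y))) < 1e-10
    x, y = x_new, y_new
    if done:
        break

print("x(T) =", x[-1], "  k*y(T) =", k * y[-1])   # controllability condition (1.2)
print("y(0) =", y[0])                             # initial condition y(0) = beta
```

The printed values illustrate that, at the approximate fixed point, the controllability condition (1.2) and the initial condition $y(0)=\beta$ hold, in agreement with Lemma 2.3 below.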

The next result establishes the equivalence between the fixed points of the operator (N1,N2)(N_{1},N_{2}) and the solutions of our mutual control problem.

Lemma 2.3.

A pair $(x,y)\in X_{\beta}$ is a solution of the mutual control problem (1.1)-(1.2), i.e., it satisfies both (2.2) and (2.3), if and only if it is a fixed point for the operator $(N_{1},N_{2})$.

Proof.

The necessity has already been explained. For the sufficiency, assume $(x,y)\in X_{\beta}$ is a fixed point of $(N_{1},N_{2})$, i.e., it satisfies (2.4). Letting $t=0$ in the first relation, we derive the controllability condition (2.3). Next, applying the operator $S_{A}(t)$ to (2.3), we deduce

S_{A}(t)x(0)+\int_{0}^{T}S_{A}(t-s)f(x(s),y(s))\,ds=kS_{A}(t-T)S_{B}(T)\beta+k\int_{0}^{T}S_{A}(t-T)S_{B}(T-s)g(x(s),y(s))\,ds,

which, when used in the definition of $N_{1}$, leads to the first relation of (2.2), i.e.,

x(t)=S_{A}(t)x(0)+\int_{0}^{t}S_{A}(t-s)f(x(s),y(s))\,ds.

The second relation in (2.2) follows directly from the definition of $N_{2}$. ∎

2.3. Existence and uniqueness via Perov’s fixed point theorem

For our first existence result, we apply Perov's fixed point theorem, which requires that $f$ and $g$ satisfy global Lipschitz conditions.

Theorem 2.4.

Assume that there are constants $a,b,c,d\geq 0$ such that

\left|f\left(x,y\right)-f\left(\overline{x},\overline{y}\right)\right|\leq a\left|x-\overline{x}\right|+b\left|y-\overline{y}\right|,
\left|g\left(x,y\right)-g\left(\overline{x},\overline{y}\right)\right|\leq c\left|x-\overline{x}\right|+d\left|y-\overline{y}\right|,

for all $x,\overline{x},y,\overline{y}\in\mathbb{R}^{n}$. If there exists $\theta\geq 0$ such that the matrix

(2.5) M(\theta)=\left[\begin{array}{ll}TC_{A}\left(a+kcC_{B}\right)&C_{A}\left(b+kdC_{B}\right)\frac{e^{\theta T}-1}{\theta}\\[5.0pt] cTC_{B}&dC_{B}\frac{1-e^{-\theta T}}{\theta}\end{array}\right]

is convergent to zero, then the mutual control problem (1.1)-(1.2) has a unique solution in $X_{\beta}$.

Proof.

We apply Perov's fixed point theorem to the operators $N_{1},N_{2}$ on the spaces $X_{1}=C([0,T];\mathbb{R}^{n})$ and $X_{2}=C_{\beta}([0,T];\mathbb{R}^{n})$. The first space, $X_{1}$, is equipped with the uniform norm, while the second space, $X_{2}$, is equipped with the Bielecki norm $\|\cdot\|_{\theta}$, where $\theta$ is given in the hypothesis. For any $x,\overline{x}\in C([0,T];\mathbb{R}^{n})$ and $y,\overline{y}\in C_{\beta}([0,T];\mathbb{R}^{n})$, one has

\left|N_{1}\left(x,y\right)\left(t\right)-N_{1}\left(\overline{x},\overline{y}\right)\left(t\right)\right|\leq kC_{A}C_{B}\int_{0}^{T}\left|g\left(x\left(s\right),y\left(s\right)\right)-g\left(\overline{x}\left(s\right),\overline{y}\left(s\right)\right)\right|ds+C_{A}\int_{t}^{T}\left|f\left(x\left(s\right),y\left(s\right)\right)-f\left(\overline{x}\left(s\right),\overline{y}\left(s\right)\right)\right|ds
\leq TC_{A}\left(a+kcC_{B}\right)\left\|x-\overline{x}\right\|+C_{A}\left(b+kdC_{B}\right)\int_{0}^{T}e^{\theta s}e^{-\theta s}\left|y(s)-\overline{y}(s)\right|ds
\leq a_{11}\left\|x-\overline{x}\right\|+a_{12}\frac{e^{\theta T}-1}{\theta}\left\|y-\overline{y}\right\|_{\theta},

where

a_{11}=TC_{A}\left(a+kcC_{B}\right)\ \text{ and }\ a_{12}=C_{A}\left(b+kdC_{B}\right).

Therefore, taking the supremum over $t\in[0,T]$ yields

\left\|N_{1}(x,y)-N_{1}(\overline{x},\overline{y})\right\|\leq a_{11}\|x-\overline{x}\|+a_{12}\frac{e^{\theta T}-1}{\theta}\|y-\overline{y}\|_{\theta}.

For the second operator $N_{2}$, for every $t\in[0,T]$, we estimate

\left|N_{2}\left(x,y\right)\left(t\right)-N_{2}\left(\overline{x},\overline{y}\right)\left(t\right)\right|\leq C_{B}\int_{0}^{t}\left|g\left(x\left(s\right),y\left(s\right)\right)-g\left(\overline{x}\left(s\right),\overline{y}\left(s\right)\right)\right|ds
\leq cTC_{B}\|x-\overline{x}\|+dC_{B}\frac{e^{t\theta}-1}{\theta}\|y-\overline{y}\|_{\theta}.

Multiplying both sides by $e^{-\theta t}$ and taking the supremum over $t\in[0,T]$, we obtain

\left\|N_{2}\left(x,y\right)-N_{2}\left(\overline{x},\overline{y}\right)\right\|_{\theta}\leq a_{21}\|x-\overline{x}\|+a_{22}\frac{1-e^{-\theta T}}{\theta}\|y-\overline{y}\|_{\theta},

where

a_{21}=cTC_{B}\ \text{ and }\ a_{22}=dC_{B}.

Writing the above relations in vector form, one has

\begin{bmatrix}\left\|N_{1}\left(x,y\right)-N_{1}\left(\overline{x},\overline{y}\right)\right\|\\[5.0pt] \left\|N_{2}\left(x,y\right)-N_{2}\left(\overline{x},\overline{y}\right)\right\|_{\theta}\end{bmatrix}\leq M(\theta)\begin{bmatrix}\left\|x-\overline{x}\right\|\\[5.0pt] \left\|y-\overline{y}\right\|_{\theta}\end{bmatrix}.

Consequently, since the matrix $M(\theta)$ is convergent to zero, the operator $(N_{1},N_{2})$ is a Perov contraction on $X_{\beta}$. Thus, in view of Lemma 2.3, there exists a unique solution $(x,y)\in X_{\beta}$ for the problem (1.1)-(1.2). ∎
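In practice, verifying the hypothesis of Theorem 2.4 amounts to finding some $\theta\geq 0$ for which the matrix $M(\theta)$ of (2.5) is convergent to zero. The Python sketch below (ours, with arbitrary sample constants; $C_{A}$ and $C_{B}$ are taken as bounds for $|e^{-tA}|$, $|e^{-tB}|$ on $[-T,T]$) scans a range of $\theta$ and tests the spectral radius.

```python
import numpy as np

def M(theta, a, b, c, d, k, T, CA, CB):
    """The matrix M(theta) of (2.5); at theta = 0 the theta-dependent factors are taken as T."""
    v = T if theta == 0 else (np.exp(theta * T) - 1) / theta
    u = T if theta == 0 else (1 - np.exp(-theta * T)) / theta
    return np.array([[T * CA * (a + k * c * CB), CA * (b + k * d * CB) * v],
                     [c * T * CB,                d * CB * u]])

def find_theta(a, b, c, d, k, T, CA, CB, thetas=np.linspace(0.0, 10.0, 1001)):
    """Return some theta with spectral radius of M(theta) below 1, or None."""
    for theta in thetas:
        if np.max(np.abs(np.linalg.eigvals(M(theta, a, b, c, d, k, T, CA, CB)))) < 1.0:
            return theta
    return None

# arbitrary sample Lipschitz constants; with A = B = I and T = 1 one may take CA = CB = e
print(find_theta(a=0.02, b=0.05, c=0.03, d=0.02, k=1.0, T=1.0, CA=np.e, CB=np.e))
```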

2.4. Existence and localization via Schauder’s fixed point theorem

Our second result allows linear growth conditions on the functions $f$ and $g$ instead of Lipschitz conditions, but at the cost of losing the uniqueness of the fixed point of the operator $(N_{1},N_{2})$.

Theorem 2.5.

Assume that $f$ and $g$ satisfy the linear growth conditions

(2.6) \left|f(x,y)\right|\leq a\left|x\right|+b\left|y\right|+\gamma,\qquad\left|g(x,y)\right|\leq c\left|x\right|+d\left|y\right|+\delta,

for all $x,y\in\mathbb{R}^{n}$ and some nonnegative constants $a,b,c,d,\gamma,\delta$. If there exists $\theta\geq 0$ such that the matrix $M(\theta)$ given in (2.5) is convergent to zero, then the mutual control problem (1.1)-(1.2) has at least one solution $(x,y)\in X_{\beta}$ satisfying bounds of the form

\left|x\left(t\right)\right|\leq R_{1}\quad\text{and}\quad\left|y\left(t\right)\right|\leq e^{t\theta}R_{2}\ \ \left(t\in[0,T]\right),

where $R_{1}$ and $R_{2}$ are suitable positive constants determined in the proof.

Proof.

We apply Schauder's fixed-point theorem to the operator $(N_{1},N_{2})$ on the bounded, closed, and convex set $B_{R_{1}}\times B_{R_{2}}$. The sets $B_{R_{1}}$ and $B_{R_{2}}$ are balls centered at the origin, of radii $R_{1}$ and $R_{2}$, in the Banach space $C([0,T];\mathbb{R}^{n})$ with the uniform norm and in the metric space $C_{\beta}([0,T];\mathbb{R}^{n})$ with the Bielecki norm $\|\cdot\|_{\theta}$, respectively. Here, $R_{1}$ and $R_{2}$ are positive real numbers that will be determined later.

To this end, we need to show that the operator $(N_{1},N_{2})$ is completely continuous on $X_{\beta}$ and that it maps the set $B_{R_{1}}\times B_{R_{2}}$ into itself, i.e.,

(2.7) \left\|N_{1}(x,y)\right\|\leq R_{1},\ \left\|N_{2}(x,y)\right\|_{\theta}\leq R_{2}\ \text{ whenever }\|x\|\leq R_{1},\ \|y\|_{\theta}\leq R_{2}.

The complete continuity follows by standard arguments (see, e.g., [13]).

Let $(x,y)\in X_{\beta}$. Then,

\left\|N_{1}\left(x,y\right)\right\|\leq kC_{A}C_{B}\left|\beta\right|+kC_{A}C_{B}\int_{0}^{T}\left|g\left(x(s),y\left(s\right)\right)\right|ds+C_{A}\int_{t}^{T}\left|f\left(x(s),y\left(s\right)\right)\right|ds
\leq TC_{A}\left(a+kcC_{B}\right)\left\|x\right\|+C_{A}\left(b+kdC_{B}\right)\frac{e^{\theta T}-1}{\theta}\left\|y\right\|_{\theta}+\eta_{1},

where

\eta_{1}=C_{A}\left(kC_{B}\left|\beta\right|+kTC_{B}\delta+T\gamma\right).

Similarly,

(2.8) \left|N_{2}\left(x,y\right)\left(t\right)\right|\leq C_{B}\left|\beta\right|+C_{B}\int_{0}^{t}\left|g\left(x\left(s\right),y\left(s\right)\right)\right|ds.

Multiplying (2.8) by $e^{-t\theta}$ and taking the supremum over $[0,T]$ yields

\left\|N_{2}\left(x,y\right)\right\|_{\theta}\leq cTC_{B}\left\|x\right\|+dC_{B}\frac{1-e^{-\theta T}}{\theta}\left\|y\right\|_{\theta}+\eta_{2},

where

\eta_{2}=C_{B}\left|\beta\right|+TC_{B}\delta.

Thus,

\left[\begin{array}{c}\left\|N_{1}\left(x,y\right)\right\|\\[5.0pt] \left\|N_{2}\left(x,y\right)\right\|_{\theta}\end{array}\right]\leq M\left(\theta\right)\left[\begin{array}{c}\left\|x\right\|\\[5.0pt] \left\|y\right\|_{\theta}\end{array}\right]+\left[\begin{array}{c}\eta_{1}\\[5.0pt] \eta_{2}\end{array}\right].

Note that the invariance condition (2.7) holds if $R_{1},R_{2}$ satisfy

M\left(\theta\right)\left[\begin{array}{c}R_{1}\\[5.0pt] R_{2}\end{array}\right]+\left[\begin{array}{c}\eta_{1}\\[5.0pt] \eta_{2}\end{array}\right]\leq\left[\begin{array}{c}R_{1}\\[5.0pt] R_{2}\end{array}\right],

or equivalently

(2.9) \left[\begin{array}{c}\eta_{1}\\[5.0pt] \eta_{2}\end{array}\right]\leq\left(I-M\left(\theta\right)\right)\left[\begin{array}{c}R_{1}\\[5.0pt] R_{2}\end{array}\right].

Since the matrix $M(\theta)$ is convergent to zero, Lemma 1.1 guarantees that the matrix $(I-M(\theta))^{-1}$ has nonnegative entries. Thus, we can multiply (2.9) by $(I-M(\theta))^{-1}$ without changing the direction of the inequality. If we choose $R_{1},R_{2}$ large enough such that

(I-M\left(\theta\right))^{-1}\begin{bmatrix}\eta_{1}\\[5.0pt] \eta_{2}\end{bmatrix}\leq\begin{bmatrix}R_{1}\\ R_{2}\end{bmatrix},

the invariance condition is satisfied. Consequently, Schauder's fixed point theorem in $B_{R_{1}}\times B_{R_{2}}$ provides the result. ∎
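A natural choice of the radii in the proof is $R=(I-M(\theta))^{-1}(\eta_{1},\eta_{2})^{T}$, for which (2.9) holds with equality; since $(I-M(\theta))^{-1}$ has nonnegative entries, this $R$ is componentwise nonnegative. The following Python sketch (ours, with arbitrary numbers) computes such radii.

```python
import numpy as np

# arbitrary sample data: a matrix M(theta) that is convergent to zero and the vector (eta1, eta2)
Mtheta = np.array([[0.3, 0.4],
                   [0.2, 0.1]])
eta = np.array([1.5, 2.0])

assert np.max(np.abs(np.linalg.eigvals(Mtheta))) < 1.0      # spectral radius below 1
R = np.linalg.solve(np.eye(2) - Mtheta, eta)                # R = (I - M(theta))^{-1} eta
print("R1, R2 =", R)                                        # nonnegative radii
print("(2.9) holds:", np.all(eta <= (np.eye(2) - Mtheta) @ R + 1e-12))
```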

2.5. Existence via Avramescu’s fixed point theorem

Assuming that both functions $f$ and $g$ have linear growth, and that $g$ is Lipschitz continuous with respect to the second variable $y$, we obtain the following existence result.

Theorem 2.6.

Assume there exist constants $a,b,c,d,\gamma,\delta\geq 0$ such that

\left|f(x,y)\right|\leq a\left|x\right|+b\left|y\right|+\gamma,
\left|g(x,y)-g\left(x,\overline{y}\right)\right|\leq d\left|y-\overline{y}\right|,

and

\left|g\left(x,0\right)\right|\leq c\left|x\right|+\delta,

for all $x,y,\overline{y}\in\mathbb{R}^{n}$. If there exists $\theta\geq 0$ such that the matrix $M(\theta)$ given in (2.5) is convergent to zero, then the mutual control problem (1.1)-(1.2) has at least one solution in $X_{\beta}$.

Proof.

Clearly,

|f(x,y)|a|x|+b|y|+γ,\displaystyle\left|f(x,y)\right|\leq a\left|x\right|+b\left|y\right|+\gamma,
|g(x,y)|c|x|+d|y|+δ,\displaystyle\left|g(x,y)\right|\leq c\left|x\right|+d\left|y\right|+\delta,

for all x,ynx,y\in\mathbb{R}^{n}. Then, as in the proof of Theorem 2.5, we find R1,R2>0R_{1},R_{2}>0 such that

N1(x,y)R1and N2(x,y)θR2\left\|N_{1}(x,y)\right\|\leq R_{1}\ \ \ \text{and\ \ \ }\left\|N_{2}(x,y)\right\|_{\theta}\leq R_{2}

for all (x,y)Xβ\left(x,y\right)\in X_{\beta} with xR1\left\|x\right\|\leq R_{1} and yθR2\left\|y\right\|_{\theta}\leq R_{2}. Moreover, N1(BR1×BR2)N_{1}\left(B_{R_{1}}\times B_{R_{2}}\right) is relatively compact in C([0,T];n)C\left([0,T];\mathbb{R}^{n}\right). It remains to show that N2(x,)N_{2}(x,\cdot) is a contraction on BR2B_{R_{2}} with a contraction coefficient independent of xBR1x\in B_{R_{1}}. For any xC([0,T];n)x\in C\left([0,T];\mathbb{R}^{n}\right) and any y,y¯Cβ([0,T];n)y,\overline{y}\in C_{\beta}\left([0,T];\mathbb{R}^{n}\right), we have

|N2(x,y)(t)N2(x,y¯)(t)|dCB0t|y(s)y¯(s)|𝑑s,\left|N_{2}(x,y)(t)-N_{2}(x,\overline{y})(t)\right|\leq dC_{B}\int_{0}^{t}\left|y(s)-\overline{y}(s)\right|ds,

whence

\left\|N_{2}(x,y)-N_{2}(x,\overline{y})\right\|_{\theta}\leq dC_{B}\frac{1-e^{-\theta T}}{\theta}\left\|y-\overline{y}\right\|_{\theta},

where

dCB1eθTθ<1,dC_{B}\frac{1-e^{-\theta T}}{\theta}<1,

since the matrix M(θ)M(\theta) is convergent to zero. Thus, Avramescu’s theorem applies and gives the final conclusion. ∎

Remark 3.

Under the conditions of the previous theorem, if $(x,y)$ and $(x,\overline{y})$ are two solutions of the mutual control problem (1.1)-(1.2), then necessarily $y=\overline{y}$.

3. Conclusions

The concept of mutual control introduced in this paper, and illustrated here for semilinear first-order systems with a final condition of proportionality of the states, can likely be applied to various other classes of systems and other controllability conditions. In this context, problems of observability or partial observability can be formulated. The working technique, as suggested by this paper, is mainly based on fixed point theory and could be complemented by other abstract existence principles.

References

  • [1] M. Beldinski and M. Galewski, Nash type equilibria for systems of non-potential equations, Appl. Math. Comput. 385 (2020), 125456.
  • [2] A. Berman and R. J. Plemmons, Nonnegative matrices in the mathematical sciences, Academic Press, 1979.
  • [3] M. Coron, Control and nonlinearity, AMS, Providence, 2007.
  • [4] K. Deimling, Nonlinear functional analysis, Springer, Berlin, 1985.
  • [5] A. Granas, Fixed point theory, Springer, New York, 2003.
  • [6] P. Magal and S. Ruan, Theory and applications of abstract semilinear Cauchy problems, Springer, Berlin, 2018.
  • [7] I. Mezo, The Lambert W function: Its generalizations and applications, Chapman and Hall/CRC.
  • [8] S. Park, Generalizations of the Nash equilibrium theorem in the KKM theory, Fixed Point Theor. Appl. 2010 (2010), 234706.
  • [9] R. Precup, The role of matrices that are convergent to zero in the study of semilinear operator systems, Math. Comput. Model. 49 (2009), 703–708.
  • [10] R. Precup, Nash-type equilibria and periodic solutions to nonvariational systems, Adv. Nonlinear Anal. 3 (2014), no. 4, 197–207.
  • [11] R. Precup, A critical point theorem in bounded convex sets and localization of Nash-type equilibria of nonvariational systems, J. Math. Anal. Appl. 463 (2018), 412–431.
  • [12] R. Precup, On some applications of the controllability principle for fixed point equations, Results Appl. Math. 13 (2022), 100236.
  • [13] R. Precup, Methods in nonlinear integral equations, Springer, Dordrecht, 2002.
  • [14] R. Precup and A. Stan, Stationary Kirchhoff equations and systems with reaction terms, AIMS Mathematics 7 (2022), Issue 8, 15258–15281.
  • [15] R. Precup and A. Stan, Linking methods for componentwise variational systems, Results Math. 78 (2023), 1–25.
  • [16] A. Stan, Nonlinear systems with a partial Nash type equilibrium, Studia Univ. Babeş-Bolyai Math. 66 (2021), 397–408.
  • [17] A. Stan, Nash equilibria for componentwise variational systems, J. Nonlinear Funct. Anal. 6 (2023), 1–10.
  • [18] A. Stan, Localization of Nash-type equilibria for systems with partial variational structure, J. Numer. Anal. Approx. Theory 52 (2023), 253–272.
  • [19] I. I. Vrabie, C0C_{0}-Semigroups and applications, Elsevier, Amsterdam, 2003.
  • [20] J. Zabczyk, Mathematical control theory, Springer, Cham, 2020.