Octavian Agratini (ICTP and Babeș-Bolyai University, Cluj-Napoca)

From the point of view of approximation theory, the sequence of operators associated with a function must converge to the approximated element. Usually, for continuous functions defined on a compact set, uniform convergence takes place, but for continuous functions defined on an unbounded interval only pointwise convergence occurs. Considering a general class of linear positive approximation processes designed using series, the goal of our talk is to present sufficient conditions which ensure uniform convergence on unbounded intervals. The results established can be applied to several particular cases, such as the Mastroianni or Jain operators.

Sebastian Anița (Al. I. Cuza University, Iași)

We discuss the problem of minimizing the total cost of the damages produced by an alien predator population and of the regional control to reduce this population. The dynamics of the predators is described by a prey-predator system with local/nonlocal reaction terms. A sufficient condition for the null-stabilizability of predators is given in terms of the sign of the principal eigenvalue of an appropriate operator that is not self-adjoint, and a stabilizing feedback control with a very simple structure is indicated. The minimization related to such a feedback control is then treated via a closely related problem viewed as a regional control problem.

This is joint work with Gabriel Dimitriu (University of Medicine and Pharmacy “Grigore T. Popa”).

Constantin Băcuță (University of Delaware, USA)

We present a least squares method for discretizing singularly perturbed second order elliptic problems. Choices for discrete stable spaces are considered for the mixed formulation, and a preconditioned conjugate gradient iterative process for solving the saddle point reformulation is proposed. We provide a preconditioning strategy that can be applied to a large class of mixed variational formulations. Using the concepts of optimal test norm and saddle point reformulation, we provide a robust discretization strategy that works for uniform and non-uniform refinements for convection-reaction-diffusion problems.

Lori Badea (IMAR – Romanian Academy, Bucharest)

We consider that the subdomains of the domain decomposition are colored such that subdomains with the same color do not intersect, and we introduce and analyze the
convergence of a damped additive Schwarz method related to such a subdomain coloring for the resolution of variational inequalities and equations. By introducing an assumption on the decomposition of the convex set of the variational inequality, we theoretically analyze, in a reflexive Banach space, the convergence of the damped additive Schwarz method. The introduced assumption contains a constant \(C_0\), and we explicitly write the expression of
the convergence rate, depending on the number of colors and the constant \(C_0\), and find the values of the damping constants which minimize it.
For problems in finite element spaces, we estimate the constant \(C_0\) in terms of the total number of subdomains. We show that the convergence rate, as a function of the total number of subdomains, has an upper limit which depends only on the number of colors of the subdomains. Obviously, this limit is independent of the total number of subdomains.
Numerical results are in agreement with the theoretical ones. They have been obtained for an elasto-plastic problem in order to verify the theoretical predictions concerning the choice of the damping parameter, and the dependence of the convergence on the overlap parameter and on the number of subdomains.

Radu Bălan (University of Maryland, USA)

Among infinitely many factorizations \(A=V\cdot V^\ast\) of a psd matrix \(A\), we seek the factor \(V\) that has the smallest \((1,2)\) norm.

In this talk we review the origin of this problem as well as existing results regarding the optimal value.

We discuss also the conjecture that the squared \((1,2)\) norm of \(V\) is equivalent to the \((1,1)\) norm of \(A\).
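
A small numerical illustration (assuming here that the \((1,2)\) norm of \(V\) sums the \(\ell^2\)-norms of its rows; this reading, and the example matrix, are assumptions for illustration only): for any factorization \(A = VV^\ast\), Cauchy-Schwarz gives \(|A_{ij}| = |\langle v_i, v_j\rangle| \le \|v_i\|\,\|v_j\|\), hence \(\|A\|_{1,1} \le \|V\|_{1,2}^2\); the conjecture concerns the reverse comparison for the optimal factor.

```python
import numpy as np

rng = np.random.default_rng(0)
# Random psd matrix A = B B^T and one particular factorization A = V V^T,
# given by the Cholesky factor (one of infinitely many: V Q also works
# for any orthogonal Q).
B = rng.standard_normal((5, 5))
A = B @ B.T
V = np.linalg.cholesky(A)

# Assuming the (1,2) norm sums the l2-norms of the rows of V:
norm_12 = np.linalg.norm(V, axis=1).sum()
norm_11 = np.abs(A).sum()

# Easy direction of the comparison, valid for every factorization:
assert norm_11 <= norm_12**2 + 1e-10
```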

Lucian Beznea (IMAR – Romanian Academy, Bucharest)

We present a general result characterising the scaling property of a Markov process in terms of the transition function, the associated resolvent of kernels, and the generator. The case of a pure jump process is pointed out. We prove that the weak solution to the stochastic differential equation of fragmentation for avalanches has a scaling property. We also emphasise a fractal property of this SDE of fragmentation. We discuss numerical results, obtained by Monte Carlo simulation, that confirm the validity of the scaling property we proved.

The talk is based on joint works with Madalina Deaconu (Nancy, France) and Oana Lupascu-Stamate (Bucharest, Romania).

Beniamin Bogoșel (École Polytechnique, Palaiseau, France)

It has been conjectured by Pólya and Szegő in 1951 that among \(n\)-gons with fixed area the regular one minimizes the first eigenvalue of the Dirichlet-Laplace operator. Despite its apparent simplicity, this result has only been proved for triangles and quadrilaterals. In this work we show that the proof of the conjecture can be reduced to finitely many certified numerical computations. Moreover, the local minimality of the regular polygon is reduced to a single validated numerical computation.

The steps of the proof strategy include the analytic computation of the Hessian matrix of the first eigenvalue, the stability of the Hessian with respect to vertex perturbations and analytic upper bounds for the diameter of an optimal set. Explicit a priori error estimates are given for the finite element computation of the eigenvalues of the Hessian matrix of the first eigenvalue associated to the regular polygon.

Results presented are obtained in collaboration with Dorin Bucur.

Liliana Borcea (University of Michigan, USA)

We study an inverse problem for the wave equation, concerned with estimating the wave speed from data gathered by an array of sources and receivers that emit probing signals and measure the resulting waves. The typical mathematical formulation of velocity estimation is a nonlinear least squares minimization of the data misfit, over a search velocity space. There are two main impediments to this approach, which manifest as multiple local minima of the objective function: The nonlinearity of the mapping from the velocity to the data, which accounts for multiple scattering effects, and poor knowledge of the kinematics (smooth part of the wave speed) which causes cycle-skipping. We show that the nonlinearity can be mitigated using a data driven estimate of the internal wave field. This leads to improved performance of the inversion for a reasonable initial guess of the kinematics.

Imre Boros (ICTP, Cluj-Napoca)

In recent years, neural networks have revolutionized machine learning. Although a relatively old technique, they have enabled spectacular advances in the recognition of text, sound, images and video. Understanding the stakes of these methods raises questions at the interface between mathematics and algorithmics. In this presentation, I will explain the basic theory of these networks as well as the key concept of their supervised learning.

Radu Boţ (University of Vienna, Austria)

In this talk we address continuous-in-time dynamics as well as numerical algorithms for the problem of approaching the set of zeros of a single-valued monotone and continuous operator \(V\). The starting point of our investigations is a second order dynamical system that combines a vanishing damping term with the time derivative of \(V\) along the trajectory, which can be seen as an analogue of the Hessian-driven damping in case the operator originates from a potential. Our method exhibits fast convergence rates of order \(o \left( \frac{1}{t\beta(t)} \right)\) for \(\|V(z(t))\|\), where \(z(\cdot)\) denotes the generated trajectory and \(\beta(\cdot)\) is a positive nondecreasing function satisfying a growth condition, and also for the restricted gap function, which is a measure of optimality for variational inequalities. We also prove the weak convergence of the trajectory to a zero of \(V\).

Temporal discretizations of the dynamical system generate implicit and explicit numerical algorithms, which can both be seen as accelerated versions of the Optimistic Gradient Descent Ascent (OGDA) method for monotone operators, and for which we prove that the generated sequence of iterates \((z_k)_{k \geq 0}\) shares the asymptotic features of the continuous dynamics. In particular, for the implicit numerical algorithm we show convergence rates of order \(o \left( \frac{1}{k\beta_k} \right)\) for \(\|V(z_k)\|\) and the restricted gap function, where \((\beta_k)_{k \geq 0}\) is a positive nondecreasing sequence satisfying a growth condition. For the explicit numerical algorithm, assuming additionally that the operator \(V\) is Lipschitz continuous, we show convergence rates of order \(o \left( \frac{1}{k} \right)\) for \(\|V(z_k)\|\) and the restricted gap function. All convergence rate statements are last-iterate convergence results; in addition, we prove for both algorithms the convergence of the iterates to a zero of \(V\). Numerical experiments indicate the overwhelming superiority of our explicit numerical algorithm over other methods designed to solve monotone equations governed by monotone and Lipschitz continuous operators.
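
For orientation, a minimal sketch of the plain (non-accelerated) OGDA iteration on which the talk's algorithms build, applied to the skew operator \(V(z) = Mz\), the saddle-point field of \(f(x,y)=xy\); the step size and iteration count are illustrative assumptions:

```python
import numpy as np

# OGDA for a monotone, 1-Lipschitz operator V(z) = M z with M skew;
# plain gradient descent diverges on this problem, while the OGDA
# two-point scheme drives ||V(z_k)|| to zero.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
V = lambda z: M @ z

eta = 0.1                     # step size, below the stability threshold
z_prev = np.array([1.0, 0.0])
z = z_prev - eta * V(z_prev)  # simple initialization of the two-point scheme
for _ in range(2000):
    # z_{k+1} = z_k - 2*eta*V(z_k) + eta*V(z_{k-1})
    z, z_prev = z - 2 * eta * V(z) + eta * V(z_prev), z
print(np.linalg.norm(V(z)))   # close to 0
```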

The talk relies on a joint work with E.R. Csetnek and D.-K. Nguyen.

Renata Bunoiu (University of Lorraine, France)

We present a method for the localization of solutions for a class of nonlinear problems arising in periodic homogenization. This method combines concepts and results from the linear theory of PDEs, linear periodic homogenization theory, and nonlinear functional analysis. In particular, we use the Moser-Harnack inequality, arguments of fixed point theory and Ekeland’s variational principle. A significant gain in the homogenization theory of nonlinear problems is that our method makes possible the emergence of finitely or infinitely many solutions. This study is motivated by real-world applications in physics, engineering and biology.

Coralia Carțiș (University of Oxford, UK)

We present subspace embedding properties for hashing/count-sketch matrices that are optimal in the projection dimension of the sketch. A diverse set of results is presented, addressing the case when the input matrix has sufficiently low coherence; how this coherence changes with the number of column nonzeros (allowing a scaling of the coherence bound); and how it can be reduced through suitable transformations (considering hashed instead of subsampled coherence-reducing transformations, such as the randomised Hadamard transform). We then discuss the application of these and other sketching results to optimization algorithms: improving the efficiency of Blendenpik for linear least squares, and the efficiency and complexity of random subspace methods for nonconvex optimization.

Emil Cătinaş (ICTP, Cluj-Napoca)


Given \( x_k \rightarrow x^\ast\in {\mathbb R}\), with errors \( e_k := |x^\ast - x_k| \), the high convergence orders \(p_0>1\) of this sequence are defined either by the well known, classical formula
\[
\lim\limits_{k\rightarrow \infty}\frac{e_{k+1}}{e_k^{p_0}}=Q\in(0,\infty),
\]
or by
\[
\lim\limits_{k\rightarrow \infty} \frac{\ln e_{k+1}}{\ln e_k}, \quad
\lim\limits_{k\rightarrow \infty} \frac{\ln \frac{e_{k+2}}{e_{k+1}}}{\ln \frac{e_{k+1}}{e_k}}, \quad
\lim\limits_{k\rightarrow \infty}\big|\ln e_k\big|^{\frac 1k}, \quad \text{etc.}
\]

Such simple definitions may suggest that they have a long history, and their connections are well established, but in fact this is not the case. In this talk we present a short overview of the convergence orders and show some relations.
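
As a quick illustration of the second family of definitions, one can estimate the computational convergence order of a Newton sequence directly from its errors (a hedged sketch; the test function and number of iterations are illustrative):

```python
import math

# Newton iteration for f(x) = x^2 - 2, whose limit is x* = sqrt(2);
# the classical convergence order is p0 = 2.
x_star = math.sqrt(2.0)
x = 5.0
errors = []
for _ in range(6):
    errors.append(abs(x_star - x))
    x = x - (x * x - 2.0) / (2.0 * x)   # Newton step

# Computational order via the log-ratio formula ln e_{k+1} / ln e_k,
# which requires no knowledge of p0 (valid once e_k < 1).
orders = [math.log(errors[k + 1]) / math.log(errors[k]) for k in range(2, 5)]
print(orders)   # a decreasing sequence approaching 2
```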

References
[1] E. Cătinaş, How many steps still left to x*?, SIAM Rev., (2021)

[2] E. Cătinaş, A survey on the high convergence orders and computational
convergence orders of sequences, Appl. Math. Comput., (2019)


Teodora Cătinaş (Babeş-Bolyai University, Cluj-Napoca)

The method of Shepard is a method for the interpolation of large scattered data sets; unfortunately, it has poor reproduction qualities and a high computational cost. The combined Shepard operators diminish these drawbacks. We present two types of combined Shepard operators of Bernoulli type, their main properties and some error bounds.

We also use some modified Shepard-Bernoulli type operators for modeling and visualization of scattered data that preserve some constraints. We illustrate the properties by some numerical examples.
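
For context, the original (uncombined) Shepard interpolant can be sketched as follows; the combined Shepard-Bernoulli operators of the talk replace the plain data values by local Bernoulli-type approximations, which is not reproduced here, and the nodes and weight exponent below are illustrative:

```python
import numpy as np

# Plain Shepard interpolant: S(x) = sum_i w_i(x) f_i / sum_i w_i(x),
# with inverse-distance weights w_i(x) = |x - x_i|^(-mu).
def shepard(x_eval, nodes, values, mu=2.0):
    d = np.abs(x_eval[:, None] - nodes[None, :])
    hit = d < 1e-14                    # evaluation point coincides with a node
    d_safe = np.where(hit, 1.0, d)     # avoid division by zero
    w = d_safe ** (-mu)
    # At a node the interpolant must reproduce the data value exactly.
    w = np.where(hit.any(axis=1)[:, None], hit.astype(float), w)
    return (w * values).sum(axis=1) / w.sum(axis=1)

nodes = np.linspace(0.0, 1.0, 11)
values = np.sin(2 * np.pi * nodes)
x = np.linspace(0.0, 1.0, 101)
s = shepard(x, nodes, values)   # convex combination: stays within data bounds
```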

Radu Cîmpeanu (University of Warwick, UK)

The canonical framework of drop impact provides excellent opportunities to co-develop experimental, analytical and computational techniques in a rich multi-scale context. The talk will represent a journey across parameter space, as we address beautiful phenomena such as bouncing, coalescence and splashing, with a particular focus on scientific computing aspects and associated numerical methods.

To begin with, consider millimetric drops, generated using a custom syringe pump connected to a vertically-oriented needle, impacting a deep bath of the same fluid. Measurements of the droplet trajectory are compared directly to the predictions of a quasi-potential model, as well as fully resolved unsteady Navier-Stokes direct numerical simulations (DNS). Both theoretical techniques resolve the time-dependent bath interface shape, droplet trajectory, and droplet deformation. In the quasi-potential model (building on recent progress by Galeano-Rios et al., JFM 912, 2021), the droplet and bath shape are decomposed using orthogonal function decompositions, leading to a set of coupled damped linear oscillator equations solved using an implicit numerical method. The underdamped dynamics of the drop are directly coupled to the response of the bath through a single-point kinematic match condition, which we demonstrate to be an effective and efficient technique. The hybrid methodology has allowed us to unify and resolve interesting outstanding questions on the rebound dynamics of the multi-fluid system.

We then shift gears towards the much more violent regime of high-speed impact resulting in splashing, where a combination of matched asymptotic expansions grounded in Wagner theory and DNS allow us to produce theoretical predictions for the location and velocity of the ejected liquid jet, as well as its thickness (Cimpeanu and Moore, JFM 856, 2019). While the early-time analytical methodology neglects effects such as surface tension or viscosity (focusing on inertia instead), corrections and adaptations of the technique (Moore et al., JFM 882, 2020) will also be presented and validated against an associated computational framework, bringing us even closer to efficiently providing information of interest for applications such as inkjet printing and pesticide distribution.

Nicolae Cîndea (University of Clermont Auvergne, France)

We study the boundary controllability of the linear elasticity system reformulated as a first-order system in both space and time. Using the observability inequality known for the usual second-order elasticity system, we deduce an equivalent observability inequality for the associated first-order system. Then, the control of minimal \(L^2\)-norm can be found as the solution to a space-time mixed formulation. This first-order framework is particularly interesting from a numerical perspective since it is possible to solve the space-time mixed formulation using only piecewise linear \(C^0\)-finite elements. Numerical simulations illustrate the theoretical results.

These results are obtained in a joint work with Arthur Bottois.

Maria Crăciun (ICTP, Cluj-Napoca)

We review some aspects concerning the modelling of astronomical time series relying on observational data on variable stars. A method for obtaining better estimates for some of the preliminary parameters is discussed. This method was tested by generating realistic synthetic data (with uneven sampling, time gaps and additive noise). The analysis methodology of the uncertainties of the optimal parameters for the final model is presented.

Dan Crișan (Imperial College, UK)

This talk covers some recent work on developing particle-filter-based data assimilation methodology for high dimensional fluid dynamics models.

The algorithm presented here is a particle filter with a so-called "nudging" mechanism. The nudging procedure is used in the prediction step. In the absence of nudging, the particles have trajectories that are independent solutions of the model equations. The nudging presented here consists in adding a drift to the trajectories of the particles with the aim of maximising the likelihood of their positions given the observation data. This introduces a bias in the system that is corrected during the resampling step. The nudging procedure is theoretically justified through a standard convergence argument.

The corresponding data assimilation algorithm presented gives an asymptotically (as the number of particles increases) consistent approximation of the posterior distribution of the state given the data. The methodology is tested on a two-layer quasi-geostrophic model for a beta-plane channel flow with \({\mathcal O}(10^6)\) degrees of freedom, out of which only a minute fraction are noisily observed. I will present the effect of the nudging procedure on the performance of the data assimilation procedure for a reduced model in terms of the accuracy and uncertainty of the results. The results presented here are incorporated in [1] and [2]. The talk is based on the papers:

[1] C Cotter, D Crisan, D Holm, W Pan, I Shevchenko, Data assimilation for a quasi-geostrophic model with circulation-preserving stochastic transport noise,  Journal of Statistical Physics, 1-36, 2020.

[2] D Crisan, I Shevchenko, Particle filters with nudging, work in progress.

Victoriţa Dolean Maini (University of Strathclyde, UK)

GenEO (‘Generalised Eigenvalue problems on the Overlap’) [1] is a method for computing an operator-dependent spectral coarse space to be combined with local solves on subdomains to form a robust parallel domain decomposition preconditioner for elliptic PDEs. It has previously been proved, in the self-adjoint and positive-definite case, that this method, when used as a preconditioner for conjugate gradients, yields iteration numbers which are completely independent of the heterogeneity of the coefficient field of the partial differential operator. We extend this theory to the case of convection–diffusion–reaction problems (a class in which we also include low-frequency Helmholtz problems), which may be non-self-adjoint and indefinite, and whose discretisations are solved with preconditioned GMRES. The GenEO coarse space is defined here using a generalised eigenvalue problem based on a self-adjoint and positive-definite subproblem. We obtain GMRES iteration counts which are independent of the variation of the coefficient of the diffusion term in the operator and depend only very mildly on the variation of the other coefficients. While the iteration number estimates do grow as the non-self-adjointness and indefiniteness of the operator increase, practical tests indicate the deterioration is much milder. Thus we obtain an iterative solver which is efficient in parallel and very effective for a wide range of convection–diffusion–reaction problems.
At the end we present numerical results on more performant methods of the same type [2] for which no theory is available.

References
[1] N. Spillane, V. Dolean, P. Hauret, F. Nataf, C. Pechstein, R. Scheichl, Abstract robust coarse spaces for systems of PDEs via generalized eigenproblems in the overlaps, Numer. Math. 126 (4) (2014) 741–770.
[2] N. Bootland, V. Dolean, P. Jolivet, P.-H. Tournier, A comparison of coarse spaces for Helmholtz problems in the high frequency regime, Computers and Mathematics with Applications, Vol 98, pp 239-253, 2021.

Andrei Drăgănescu (University of Maryland Baltimore County, USA)

We devise multigrid preconditioners for linear-quadratic space-time distributed parabolic optimal control problems. While our method is rooted in earlier work on elliptic control, the temporal dimension presents new challenges in terms of algorithm design and quality. Our primary focus is on the \(cG(s)dG(r)\) discretizations, which are based on functions that are continuous in space and discontinuous in time, but our technique is applicable to various other space-time finite element discretizations. We construct and analyse two kinds of multigrid preconditioners: the first is based on full coarsening in space and time, while the second is based on semi-coarsening in space only. Our analysis, in conjunction with numerical experiments, shows that both preconditioners are of optimal order with respect to the discretization in the case of \(cG(1)dG(r)\) for \(r = 0, 1\), and exhibit suboptimal behavior in time for Crank-Nicolson. We also show that, under certain conditions, the preconditioner using full space-time coarsening is more efficient than the one involving semi-coarsening in space, a phenomenon that has not been observed previously. Our numerical results confirm the theoretical findings.

Ionuț Farcaș (University of Texas at Austin, USA)

We present a method for enhancing data-driven reduced-order modeling with a preprocessing step in which the training data are filtered prior to training the reduced model. Filtering the data prior to training has a number of benefits for data-driven modeling: it attenuates (or even eliminates) wavenumber or frequency content that would otherwise be difficult or impossible to capture with the reduced model, it smooths discontinuities in the data that would be difficult to capture in a low-dimensional representation, and it reduces noise in the data. This makes the reduced-model learning task numerically better conditioned, less sensitive to numerical errors in the training data, and less prone to overfitting when the amount of training data is limited.

We first illustrate the effects of filtering in one-dimensional advection and inviscid Burgers’ equations. We then consider large-scale rotating detonation rocket engine simulations with millions of spatial degrees of freedom for which only a few hundred down-sampled training snapshots are available. A reduced-order model is derived from these snapshots using operator inference. Our results indicate the potential benefits of filtering to reduce overfitting, which is particularly important for complex physical systems where the amount of training data is limited.
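
A minimal sketch of the filtering idea on synthetic data (a moving-average spatial filter standing in for whatever filter one chooses; the snapshot set, kernel width and noise level are illustrative assumptions, and operator inference itself is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic training snapshots: a smooth traveling wave plus noise.
x = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 1, 50)
Q = np.sin(x[:, None] - 2 * np.pi * t[None, :])      # snapshots as columns
Q_noisy = Q + 0.1 * rng.standard_normal(Q.shape)

# Preprocessing step: moving-average spatial filter applied to each
# snapshot before building the reduced basis.
kernel = np.ones(7) / 7.0
Q_filt = np.apply_along_axis(
    lambda q: np.convolve(q, kernel, mode="same"), 0, Q_noisy)

def energy_in_r_modes(S, r=2):
    s = np.linalg.svd(S, compute_uv=False)
    return (s[:r] ** 2).sum() / (s ** 2).sum()

# Filtering concentrates more energy in the leading POD modes, making
# the low-dimensional learning problem better conditioned.
print(energy_in_r_modes(Q_noisy), energy_in_r_modes(Q_filt))
```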

Distribution Statement A: Approved for Public Release; Distribution is Unlimited. PA# AFRL-2022-4312

Silviu Filip (INRIA Rennes, France)

Sorin Gal, Ionuț Iancu (University of Oradea)

The main aim of this talk is to introduce mixed operators between Choquet integral operators and max-product operators of Bernstein-Kantorovich type and to study their quantitative approximation properties.

We show that for large classes of functions, these nonlinear max-product Kantorovich-Choquet type operators give essentially better approximation orders than their linear classical correspondents.

Călin-Ioan Gheorghiu (ICTP, Cluj-Napoca)

We are concerned with the conventional Chebyshev collocation method (ChC), as well as with Chebfun, the MATLAB object-oriented implementation of this method, in order to solve the following elliptic nonlinear boundary value problem:
\begin{equation}
\left\{
\begin{array}{ll}
\Delta u+\lambda \exp \left( \frac{au}{a+u}\right) =0, & x\in \Omega , \\
u=0, & x\in \partial \Omega ,
\end{array}
\right. \tag{1}
\end{equation}
where \(\lambda >0,\ a>0\) and \(\Omega\) is a bounded domain in \({{\mathbb R}}^{n},\ n=1,2,3.\) When \(a\rightarrow \infty \) the problem becomes the so-called Bratu problem. For the bifurcation diagram of (1) when \(n=1\) we refer to [1], and when \(n=2,3\) we refer to [2].
Of special concern is to simulate numerically the relationship between the multiplicity of solutions to (1) and the dimension \(n\). We also compute the maximal value \(\lambda^*\) of the parameter \(\lambda\) for which the problem admits a unique solution.

First of all, we compute very accurately these bifurcation values and extend our numerical results reported in [3].
Then we compute a large set of solutions for all three values of \(n\), including the radial case, when the problem reduces to a mixed boundary value problem attached to a second order nonlinear differential equation.
For \(n=2\) we additionally solve the problem when the domain \(\Omega\) is the unit square.

In each case we determine the order of Newton's method in solving the nonlinear algebraic systems of the discretization and observe that the convergence of the Chebfun outcomes is generally exponential and considerably more accurate (roughly twice as accurate) than that provided by ChC.
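
For the one-dimensional case of problem (1), the discretize-then-Newton approach can be sketched as follows (using second-order finite differences as a simple stand-in for the Chebyshev collocation of the talk; grid size and parameter values are illustrative):

```python
import numpy as np

# 1D version of problem (1) on (0,1) with the perturbed-Gelfand
# nonlinearity exp(a*u/(a+u)), homogeneous Dirichlet conditions,
# second-order finite differences, and Newton's method.
lam, a = 1.0, 100.0
n = 200
h = 1.0 / n
D2 = (np.diag(-2.0 * np.ones(n - 1))
      + np.diag(np.ones(n - 2), 1)
      + np.diag(np.ones(n - 2), -1)) / h**2   # Dirichlet Laplacian

g = lambda u: a * u / (a + u)
u = np.zeros(n - 1)                            # start on the lower branch
for _ in range(20):
    F = D2 @ u + lam * np.exp(g(u))
    # Jacobian: d/du exp(g(u)) = exp(g(u)) * a^2/(a+u)^2
    J = D2 + lam * np.diag(np.exp(g(u)) * a**2 / (a + u) ** 2)
    du = np.linalg.solve(J, -F)
    u += du
    if np.linalg.norm(du) < 1e-12:
        break
print(u.max())   # positive, symmetric lower-branch solution
```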

References

[1] Huang, S-Y., Wang, S-H.: Proof of a Conjecture for the One-Dimensional Perturbed Gelfand Problem from Combustion Theory. Arch. Rational Mech. Anal. 222, 769–825 (2016).
[2] Jacobsen, J., Schmitt, K., The Liouville-Bratu-Gelfand Problem for Radial Operators, J. Differ. Equations 184, 283–298 (2002)
[3] Gheorghiu, C. I., Accurate Spectral Collocation Solutions to some Bratu’s Type Boundary Value Problems, arXiv:2011.13212v1

Ion Victor Goșea (Max Planck Institute, Magdeburg, Germany)

Despite the increased development of high-performance computing (HPC) environments, the dynamical models used in practical simulations inherit a high complexity that must be handled carefully (due to the discretization schemes). Therefore, the approximation of large-scale dynamical systems is essential for serving the scope of efficient time-domain simulation, optimization tasks, or deriving control laws. The technique for reducing the complexity is known as model order reduction (MOR). There are many ways of reducing large-scale models, and each method is tailored to specific applications and goals for complexity reduction. Methods such as balanced truncation (BT), moment matching (MM), or proper orthogonal decomposition (POD) have been extensively used for constructing surrogate models of low order that approximate well the original dynamics. However, many such methods are intrusive, in the sense that they require explicit access to the original model (to the system’s matrices). On the other hand, with the ever-increasing availability of data, i.e., measurements related to the original model (either in the time domain or in the frequency domain), non-intrusive techniques related to machine learning (ML) development have recently been proposed to learn the hidden model. In this contribution, we aim at introducing a data-driven approach that is used to learn/discover reduced-order dynamical models from frequency-response measurements. In the MOR community, this approach is known as the Loewner framework (LF) and was introduced 15 years ago by Mayo and Antoulas. This talk is meant as an LF tutorial for researchers working outside the MOR world. We show the main building blocks of the method, with particular emphasis on the numerical linear algebra aspects. Additionally, if time permits, we intend to show recent extensions of LF and particular applications to real-world problems.

Vasile Grădinaru (ETH Zurich, Switzerland)

The quantum dynamics of nuclei is governed by the time-dependent Schrödinger equation. We discuss shortly the model and give an overview of the challenges and opportunities that various spectral methods in space, together with appropriate splittings in time, offer in this context.

Eduard Grigoriciuc (ICTP, Cluj-Napoca)

It is known that, in general, a convex combination of two univalent functions on the unit disc in \(\mathbb{C}\) is not necessarily univalent. Starting from this point, we present some classical results that solve (partially) the problem of univalence in \(\mathbb{C}\). Moreover, we can extend some of these results to the case of several complex variables. Together with the theoretical results, we present some numerical examples of starlike (respectively, convex) functions obtained as convex combinations of univalent functions in \(\mathbb{C}\). Finally, we discuss a numerical approach to the Hele-Shaw flow problem.

Sever Hîrștoagă (INRIA Paris, France)

We solve numerically multi-scale in time Vlasov-type models, by using a specific version of the parareal algorithm. More precisely we use for the coarse solving reduced models obtained from the two-scale asymptotic expansion method. The reduced models are useful to approximate the original Vlasov model at a low computational cost since they are free of high oscillations. We illustrate this strategy with numerical experiments based on long time simulations of charged particle beams in an electromagnetic field. We provide an analysis of the efficiency of the parareal algorithm in terms of speedup.
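
The parareal mechanism can be sketched on the scalar model problem \(y' = -y\) (the coarse and fine propagators below are illustrative stand-ins for the reduced and full Vlasov solvers of the talk):

```python
import numpy as np

# Parareal: a cheap coarse propagator G (one Euler step per window) is
# corrected by an accurate fine propagator F (many RK4 substeps) via
#   U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k).
T, N = 2.0, 10
dt = T / N
f = lambda y: -y

def G(y):                      # coarse: single forward Euler step
    return y + dt * f(y)

def F(y, m=20):                # fine: m classical RK4 substeps
    h = dt / m
    for _ in range(m):
        k1 = f(y); k2 = f(y + h / 2 * k1)
        k3 = f(y + h / 2 * k2); k4 = f(y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

U = np.empty(N + 1)
U[0] = 1.0
for n in range(N):             # iteration 0: coarse sweep
    U[n + 1] = G(U[n])
for k in range(4):             # a few parareal corrections
    Fu = np.array([F(U[n]) for n in range(N)])   # parallel in practice
    Gu = np.array([G(U[n]) for n in range(N)])
    for n in range(N):         # cheap sequential coarse update
        U[n + 1] = G(U[n]) + Fu[n] - Gu[n]
print(abs(U[-1] - np.exp(-T)))  # small after a few corrections
```

The fine solves `Fu` are independent across windows, which is where the parallel speedup comes from.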

Liviu Ignat (IMAR – Romanian Academy, Bucharest)

In this talk we present some recent results about the long time behaviour of the solutions of some numerical approximation schemes for Burgers' equation. We start by presenting some scaling methods that allow one to obtain the long time behaviour of the solutions to some nonlocal evolution problems. Once this method is clarified in the continuous framework, we adapt it to obtain similar results for the solutions of some classical numerical schemes for Burgers' equation: Lax-Friedrichs, Engquist-Osher and Godunov. We prove that, at the numerical level, the large time dynamics depends on the amount of numerical viscosity introduced by the scheme: while Engquist-Osher and Godunov yield the same N-wave asymptotic behaviour, the Lax-Friedrichs scheme leads to viscous self-similar profiles, corresponding to the asymptotic behaviour of the solutions of the continuous viscous Burgers' equation.
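
To illustrate the role of numerical viscosity, a minimal sketch of the Lax-Friedrichs scheme for the inviscid Burgers equation (grid, CFL number and initial datum are illustrative assumptions):

```python
import numpy as np

# Lax-Friedrichs scheme for u_t + (u^2/2)_x = 0 on a periodic box
# large enough that the support never reaches the boundary; the
# scheme's built-in numerical viscosity smears the shock.
N, L = 400, 20.0
dx = L / N
x = -L / 2 + dx * np.arange(N)
u = np.exp(-x**2)                     # smooth hump, mass ~ sqrt(pi)
dt = 0.4 * dx                         # CFL: (dt/dx) * max|u| <= 1
mass0 = u.sum() * dx

flux = lambda v: 0.5 * v**2
for _ in range(800):                  # integrate up to T = 16
    up = np.roll(u, -1)               # u_{j+1}
    um = np.roll(u, 1)                # u_{j-1}
    u = 0.5 * (up + um) - dt / (2 * dx) * (flux(up) - flux(um))

# The scheme is conservative (mass preserved) while the amplitude decays.
print(u.max(), u.sum() * dx)
```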

Acknowledgment: this is a joint work with Alejandro Pozo (Bilbao) and Enrique Zuazua (Madrid & Bilbao & Erlangen).

Traian Iliescu (Virginia Tech, USA)

In this talk, I will survey reduced order model (ROM) closures for under-resolved turbulent flows. Over the past decade, several closure strategies have been developed to tackle the ROM inaccuracy in the convection-dominated, under-resolved regime, i.e., when the number of degrees of freedom is too small to capture the complex underlying dynamics. I will survey three classes of ROM closures, i.e., correction terms that increase the ROM accuracy: (i) functional closures, which are based on physical insight; (ii) structural closures, which are developed by using mathematical arguments; and (iii) data-driven closures, which leverage available data. Throughout my talk, I will highlight the impact made by data on classical numerical methods over the past decade. I will also emphasize the role played by physical constraints in data-driven modeling of ROM closures.

Stelian Ion (ISMMA – Romanian Academy, Bucharest)

In this talk we present a method to perform a multiresolution analysis of 2D scattered data. The method is intended for multiple purposes and multiple usages of the data, and does not assume regularity of the spatial distribution of the data. The method consists of three steps: 1) using a segmentation algorithm, one partitions the data into block-structured data; 2) for each block of the partition one defines, using a proper interpolation algorithm, a function defined everywhere on the block; 3) the block-defined functions are projected onto the space of cubic spline wavelets at a proper level. The first step is designed to manage large data, while the second and third steps deal with data denoising. The cubic spline wavelet approximation allows one to reduce the data cardinality and to speed up data-based computations. Some numerical tests concerning the method and other related facts are also presented.

Joint work with Dorin Marinescu and Stefan-Gicu Cruceanu.

Petru Jebelean (West University of Timișoara)

We discuss the multiplicity of periodic solutions to potential differential systems of type
\[-\left(\frac{u'}{\sqrt{1- |u'|^2}} \right)'= \lambda \nabla G(u) \mbox{ in }[0,T]
\]
and to difference equations having the form
\[-\Delta \left(\frac{\Delta u(n-1)}{\sqrt{1 - |\Delta u(n-1)|^2}}\right) = \lambda g(u(n))\quad (n \in \mathbb{Z}).\]
Here \(G:\mathbb{R}^N \to \mathbb{R}\) is even and of class \(C^1\), \(g:\mathbb{R}\to \mathbb{R}\) is a continuous odd function, and \(\lambda\) is a positive parameter. The approach is variational and relies on critical point theory for convex, lower semicontinuous perturbations of \(C^1\)-functionals.

Based on joint work with Călin Șerban.

Nataliia Kolun (Odessa Military Academy, Ukraine & Babeș-Bolyai University)

We are concerned with positive solutions for the Dirichlet boundary value problem for equations and systems of Kirchhoff type. We obtain existence and localization results of positive solutions using Krasnosel’skii’s fixed point theorem in cones and a weak Harnack type inequality. The localization is given in terms of energy norm, being of interest from a physical point of view. In the case of systems, the results on the localization are established componentwise using the vector version of Krasnosel’skii’s theorem, which allows some of the equations of the system to satisfy the compression condition and others the expansion one.

Cătălin Lefter (Al.I. Cuza University and Octav Mayer Institute, Iași)

We consider systems of parabolic equations coupled in zero order terms.

Controllability problems with controls acting only in part of the equations of the system, as well as inverse source problems through observations only on some components of the system use as essential ingredients appropriate Carleman estimates.

In our talk we discuss various aspects concerning the deduction and the use of these estimates.

Daniel Loghin (University of Birmingham, UK)

Large scale problems are commonly solved using iterative methods. These methods are usually combined with preconditioning techniques aimed at rendering the solver performance optimal, i.e., independent of problem size, possibly also independent of other problem parameters. In the case of problems arising from the discretisation of PDE, the design of an efficient preconditioner is linked to the choice of partial differential operator. In particular, a suitable inclusion of the boundary operator in the preconditioning technique is essential. While this is well understood for simple (scalar) PDE, for complex applications this is not always a straightforward task. This is the case of, for example, boundary control problems or problems coupled at an interface.

In this talk I will discuss some classes of problems where a suitable choice of boundary preconditioner ensures the optimal performance of the iterative solver. Analysis will be presented, together with validating numerical experiments.

Andra Malina (ICTP, Cluj-Napoca)

We obtain new univariate Shepard operators using polynomials constructed to fit the interpolation data in the weighted least squares sense. We study the degree of exactness, the linearity and the remainder of the corresponding interpolation formula. Finally, we compare the numerical results with those obtained using other combined Shepard operators.
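For background, the classical univariate Shepard operator blends nodal values by inverse-distance weights; a minimal sketch is given below (illustrative only: the operators of the talk replace the nodal values by weighted least squares polynomial fits, which is not reproduced here).

```python
# Classical univariate Shepard operator: S(f)(x) = sum_i w_i(x) f(x_i) with
# inverse-distance weights w_i(x) = |x - x_i|^{-mu} / sum_j |x - x_j|^{-mu}.
# (Illustrative sketch of the classical building block only; the talk's
# operators use local weighted least squares polynomial fits instead.)
def shepard(nodes, values, mu=2.0):
    def S(x):
        # interpolation property: return the nodal value exactly at a node
        for xi, fi in zip(nodes, values):
            if x == xi:
                return fi
        weights = [abs(x - xi) ** (-mu) for xi in nodes]
        total = sum(weights)
        return sum(w * fi for w, fi in zip(weights, values)) / total
    return S

nodes = [0.0, 0.5, 1.0, 1.5, 2.0]
S = shepard(nodes, [xi ** 2 for xi in nodes])
print(S(0.5), S(0.7))  # reproduces the data at nodes; blends them in between
```

The classical operator has degree of exactness 0 (it reproduces constants); replacing the nodal values by local polynomial fits is what raises this degree.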

Liviu Marin (University of Bucharest and ISMMA – Romanian Academy)

We study the recovery of the missing discontinuous/non-smooth thermal boundary conditions on an inaccessible portion of the boundary of the domain occupied by a solid from Cauchy data prescribed on the remaining boundary assumed to be accessible, in the case of stationary anisotropic heat conduction with non-smooth/discontinuous conductivity coefficients. This inverse boundary value problem is ill-posed and, therefore, should be regularized. Consequently, a stabilising method is developed based on a priori knowledge on the solution to this inverse problem and the smoothing feature of the direct problems involved. The original problem is transformed into a control one which reduces to solving an appropriate minimisation problem in a suitable function space. The latter problem is tackled by employing an appropriate variational method which yields a gradient-type iterative algorithm that consists of two direct problems and their corresponding adjoint ones. This approach yields an algorithm designed specifically to approximate merely \(\mathrm{L}^2\)-boundary data in the context of a non-smooth/discontinuous anisotropic conductivity tensor, hence both the notion of solution to the direct problems involved and the convergence analysis of the approximate solutions generated by the algorithm proposed require special attention. The numerical implementation is realised for 2D anisotropic solids using the FEM, whilst regularization is achieved by terminating the iteration according to two stopping criteria.

Joint work with Mihai Bucataru (University of Bucharest & ISMMA-AR) and Iulian Cîmpean (University of Bucharest & IMAR)

Sorin Micu (University of Craiova and ISMMA – Romanian Academy)

We analyze a method for the approximation of exact controls of a second order infinite dimensional system with bounded input operator. The algorithm combines Russell’s “stabilizability implies controllability” principle with the Galerkin method. The main feature brought in by this scheme consists in allowing precise error estimates. A numerical viscosity is introduced in order to improve the convergence rates previously obtained in the literature.

Cornel Murea (Université de Haute Alsace, France)

This is a joint work with Dan Tiba (Institute of Mathematics, Romanian Academy, dan.tiba@imar.ro).

We study the penalized steady Navier-Stokes system with Neumann boundary conditions in a holdall domain, its approximation properties (with error estimates), and the uniqueness of the solution, which is obtained in a non-standard manner. Numerical tests are presented.

Mihai Nechita (ICTP, Cluj-Napoca)

We consider the unique continuation problem for a stationary convection–diffusion equation, with data given in an interior subset of the domain and no boundary conditions.
For this ill-posed problem, we first discuss conditional stability estimates that are explicit in the physical parameters.

Casting the problem as PDE-constrained optimisation, we present a finite element method based on a discretise-then-regularise approach. The regularisation is based on penalising the jumps of the gradient across the interior faces of the finite element triangulation.

When diffusion dominates, we prove convergence rates by applying the continuum stability estimates to the approximation error and controlling the residual through stabilisation.
When convection dominates, we perform a local analysis and obtain weighted error estimates with quasi-optimal convergence along the characteristics of the convective field through the data set.

The talk is based on the papers:
[1] E. Burman, M. Nechita, L. Oksanen, A stabilized finite element method for inverse problems subject to the convection–diffusion equation. I: diffusion-dominated regime, Numer. Math., 144:451–477, 2020.
[2] E. Burman, M. Nechita, L. Oksanen, A stabilized finite element method for inverse problems subject to the convection–diffusion equation. II: convection-dominated regime, Numer. Math., 150:769–801, 2022.

Ion Necoară (Politehnica University, Bucharest)

We consider large-scale composite optimization problems having the objective function formed as a sum of two terms (possibly nonconvex), one having a block coordinate-wise Lipschitz continuous gradient and the other being differentiable but nonseparable. Under these general settings we derive and analyze two new random coordinate descent methods. The first algorithm, referred to as random coordinate proximal gradient method, considers the composite form of the objective function, while the other algorithm disregards the composite form of the objective and uses the partial gradient of the full objective, yielding a random coordinate gradient descent scheme with novel adaptive stepsize rules. We prove that these new stepsize rules make the random coordinate gradient scheme a descent method, provided that additional assumptions hold for the second term in the objective function. We also present a complete worst-case complexity analysis for these two new methods in both convex and nonconvex settings. Preliminary numerical results also confirm the efficiency of our two algorithms on practical problems.
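As a point of reference, a generic random coordinate gradient descent with the standard stepsize \(1/L_i\) (not the new adaptive rules of the talk) can be sketched on a convex quadratic as follows.

```python
import random

# Minimize f(x) = 0.5 x^T A x - b^T x by random coordinate gradient descent:
# at each step pick a coordinate i uniformly and update x_i by a gradient step
# with stepsize 1/L_i, where L_i = A[i][i] is the coordinate-wise Lipschitz
# constant of the gradient.  (Generic illustration, not the talk's methods.)
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]          # symmetric positive definite
b = [1.0, -2.0, 0.5]

def grad_i(x, i):
    # i-th component of the gradient A x - b
    return sum(A[i][j] * x[j] for j in range(len(x))) - b[i]

random.seed(0)
x = [0.0, 0.0, 0.0]
for _ in range(2000):
    i = random.randrange(len(x))
    x[i] -= grad_i(x, i) / A[i][i]

residual = max(abs(grad_i(x, i)) for i in range(len(x)))
print(x, residual)  # the residual tends to 0: x approaches the solution of A x = b
```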

Claudia Negulescu (Université Paul Sabatier, Toulouse, France)

The main concern of the present talk is the study of the multi-scale dynamics of thermonuclear fusion plasmas via a multi-species Fokker-Planck kinetic model.
One of the goals is the generalization of the standard Fokker-Planck collision operator to a multi-species one, conserving mass, total momentum and energy, as well as satisfying Boltzmann’s H-theorem. Secondly, we perform on one hand a mathematical asymptotic limit, letting the electron/ion mass ratio converge towards zero, to obtain a thermodynamic equilibrium state for the electrons (adiabatic regime), whereas the ions are kept kinetic. On the other hand, we develop a first numerical scheme, based on a Hermite spectral method, and perform numerical simulations to investigate this asymptotic limit in more detail.

Maria Neuss-Radu (Friedrich Alexander Universitaet, Erlangen-Nuernberg, Germany)

We derive an effective model for transport processes in periodically perforated elastic media, taking into account, e.g., cyclic elastic deformations as they occur in lung tissue due to respiratory movement. The underlying microscopic problem couples the deformation of the domain with a diffusion process within a mixed Lagrangian/Eulerian formulation. After a transformation of the diffusion problem onto the fixed domain, we use the formal method of two-scale asymptotic expansion to derive the upscaled model, which is nonlinearly coupled through effective coefficients.

The effective model is implemented and validated using an application-inspired model problem. Numerical solutions for both the cell problems and the macroscopic equations are investigated and interpreted. We use simulations to qualitatively determine the effect of the deformation on the transport process.

This research is supported by SCIDATOS (Scientific Computing for Improved Detection and Therapy of Sepsis), a collaborative project funded by the Klaus Tschira Foundation, Germany. The results are obtained in collaboration with J. Knoch and N. Neuss (FAU Erlangen-Nürnberg)  and M. Gahn (University of Heidelberg).

Constantin P. Niculescu (University of Craiova)

The main result is as follows:

Suppose that \(E\) is an ordered Banach space and \(f:\mathbb{R}_{+}\rightarrow E\) is a continuous \(3\)-convex function in the sense of Popoviciu. Then \(f\) verifies the functional inequality
\[
f(x) + f(y) + f(z) + f(x+y+z) \geq f(x+y) + f(y+z) + f(z+x) + f(0)
\]
for all \(x,y,z\in\mathbb{R}_{+}\).

If, in addition, \(f\) is nondecreasing and concave, then
\[
f(|x|) + f(|y|) + f(|z|) + f(|x+y+z|) \geq f(|x+y|) + f(|y+z|) + f(|z+x|) + f(0)
\]
for all \(x,y,z\in\mathbb{R}\).

This extends a result due to Paul Ressel; see his paper The Hornich–Hlawka inequality and Bernstein functions, J. Math. Inequal. 9 (2015), 883–888.
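As an illustrative numerical check (not part of the result), take the scalar 3-convex function \(f(t)=t^3\) on \(\mathbb{R}_{+}\), for which the gap between the two sides of the first inequality reduces algebraically to \(6xyz \geq 0\).

```python
import random

# Numerical sanity check of the first inequality for the scalar function
# f(t) = t**3 on the nonnegative reals (f''' = 6 >= 0, so f is 3-convex in
# Popoviciu's sense).  For this f one verifies algebraically that
# LHS - RHS = 6*x*y*z, so the inequality holds, with equality iff x*y*z = 0.
f = lambda t: t ** 3

random.seed(1)
ok = True
for _ in range(10000):
    x, y, z = (random.uniform(0, 10) for _ in range(3))
    lhs = f(x) + f(y) + f(z) + f(x + y + z)
    rhs = f(x + y) + f(y + z) + f(z + x) + f(0)
    gap = lhs - rhs
    ok = ok and abs(gap - 6 * x * y * z) < 1e-6 * max(1.0, lhs) and gap >= -1e-9
print(ok)  # True: the gap equals 6xyz and is nonnegative
```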

Diana Otrocol (ICTP and Technical University of Cluj-Napoca)

By a new variant of the fibre contraction principle we give existence, uniqueness and convergence of successive approximations results for some functional equations. In the case of ordered Banach spaces, Gronwall-type and comparison-type results are also given.

The talk is based on a joint work with V. Ilea, I.A. Rus and M. Șerban.

Cosmin Petra (Lawrence Livermore National Laboratory, USA)

Mathematical optimization models known as optimal power flow are solved daily to identify electricity generation levels and network flows that satisfy demand at minimum cost while enforcing a diverse range of transmission, generation, security, and other constraints. Given the mathematical difficulties (e.g., nonconvexity, nonlinearity, and nonsmoothness) and large, nation-wide geographical scale of electric power grids, operational optimal power flow problems have challenged the state-of-the-art in computational optimization and high-performance computing for decades. Our approach leverages newly unveiled weak-concavity properties of certain OPF submodels to devise a simplified bundle SQP optimization algorithm with convergence guarantees. We discuss how the algorithm achieves efficient task-based computational parallelism and show that it can handle a wide range of computing platforms, including hardware accelerators such as GPUs. Performance of our software stack on realistic and real-world OPF problems obtained during our participation in the 2019-2020 ARPA-E Grid Optimization Challenge 1 Competition will also be presented.

Sorin Pop (Hasselt University, Belgium)

We consider a mathematical model for unsaturated flow in a two-dimensional porous medium, in which a fracture is present. The fracture is viewed as a long, thin sub-domain between two blocks. The fracture may have a variable aperture. For each of the blocks adjacent to the fracture, the width and the length are of comparable size, their ratio being of order one. The flow in the fracture and in the matrix blocks is governed by the Richards equation, which is a nonlinear, possibly degenerate parabolic equation. At the common interfaces, the models in each sub-domain (in the matrix blocks and in the fracture) are coupled by transmission conditions reflecting the continuity of the normal flux and of the pressure. After discussing the existence of a weak solution for the equi-dimensional model with a fixed fracture width, we analyse the convergence towards effective models in the limit case, when \(\varepsilon\), the ratio between the fracture width and its length, goes to 0. Depending on how the ratio of different parameters (porosity and permeability) in the sub-domains scales with \(\varepsilon\), we derive different effective models, in which the fracture reduces to a one-dimensional object between two adjacent blocks. We also present numerical simulations that confirm the theoretical upscaling results.

This is a joint work with Florian List (University of Vienna), Kundan Kumar and Florin Radu (University of Bergen), Koondanibha Mitra and Tobias Koeppl (Hasselt University).

Radu Precup (ICTP and Babeș-Bolyai University, Cluj-Napoca)

Based on the bisection method and on the idea of lower and upper solutions, an approximation algorithm is introduced for solving general control problems. The method is illustrated on a control problem of cell evolution after bone marrow transplantation.
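The elementary mechanism behind the algorithm can be illustrated on a scalar model: a "lower" parameter value undershoots a target, an "upper" one overshoots it, and the midpoint is reclassified at each step. The sketch below assumes a monotone response and is only a caricature of the general control algorithm of the talk.

```python
# Generic bisection between a lower and an upper value, assuming a monotone
# response: lo undershoots the target, hi overshoots, and the midpoint is
# reclassified at each step.  (Illustrative only; the control algorithm of
# the talk is considerably more general.)
def bisect(response, target, lo, hi, tol=1e-10):
    assert response(lo) <= target <= response(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if response(mid) <= target:
            lo = mid          # mid still undershoots: new lower value
        else:
            hi = mid          # mid overshoots: new upper value
    return 0.5 * (lo + hi)

# Hypothetical example: find the parameter c with response c**2 equal to 2
c = bisect(lambda c: c * c, 2.0, lo=0.0, hi=2.0)
print(c)  # approximately sqrt(2)
```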

Florin Radu (University of Bergen, Norway)

We will present an improved technique for numerically solving variational phase-field models of brittle fracture. The most commonly used solver for such models is a staggered scheme, see e.g. [1]. This is known to be robust compared to the monolithic Newton method; however, the staggered scheme often requires many iterations to converge when cracks are evolving. The idea behind the new approach is to use Anderson acceleration and over-relaxation, switching back and forth depending on the residual evolution, and thereby ensuring a decreasing tendency [2]. Numerical results will be presented to illustrate the performance of the new solver.

References:
[1] MK Brun, T Wick, I Berre, JM Nordbotten, FA Radu, An iterative staggered scheme for phase field brittle fracture propagation with stabilizing parameters, CMAME 2020.
[2] E Storvik, JW Both, JM Sargado, JM Nordbotten, FA Radu, An accelerated staggered scheme for variational phase-field models of brittle fracture, CMAME 2021.
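A minimal illustration of the Anderson acceleration building block, here with depth one for a scalar fixed-point iteration (the solver of [2] applies acceleration to the staggered phase-field scheme and switches adaptively to over-relaxation, neither of which is reproduced here).

```python
import math

# Anderson acceleration of depth 1 for a scalar fixed-point iteration x = g(x),
# compared with the plain Picard iteration.  (Illustrative building block only.)
def picard(g, x, n):
    for _ in range(n):
        x = g(x)
    return x

def anderson1(g, x, n):
    x_prev, f_prev = x, g(x) - x
    x = g(x)
    for _ in range(n - 1):
        f = g(x) - x                      # current residual
        if abs(f) < 1e-12 or f == f_prev:
            break                         # converged (or degenerate denominator)
        theta = f / (f - f_prev)          # least-squares mixing coefficient
        x_new = g(x) - theta * (g(x) - g(x_prev))
        x_prev, f_prev, x = x, f, x_new
    return x

g = lambda x: math.cos(x)                 # fixed point near 0.739085
x_and = anderson1(g, 1.0, 10)
x_pic = picard(g, 1.0, 10)
res_and = abs(g(x_and) - x_and)
res_pic = abs(g(x_pic) - x_pic)
print(res_and, res_pic)  # the accelerated residual is far smaller
```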

Vicenţiu Rădulescu (University of Craiova and IMAR – Romanian Academy)

In this talk, I shall report on some recent results obtained jointly with N. Papageorgiou, C. Alves and D. Repovs. In the first part, I will discuss an interesting discontinuity property of the spectrum in the case of isotropic double phase equations with Dirichlet boundary condition. Next, I shall consider the anisotropic setting and I will discuss three sufficient conditions for the existence of solutions in the case of double phase with multiple “subcritical-critical-supercritical” regimes.

Adrian Sandu (Virginia Tech, USA)

Computer simulations of evolutionary multiscale multiphysics partial differential equations are important in many areas of science and engineering. Algorithms for time integration of these systems face important challenges. Multiscale problems have components evolving at different rates. No single time step can solve all components efficiently (e.g., when an explicit discretization is used, and the spatial discretization uses both fine and coarse mesh patches). Multiphysics problems are driven by multiple simultaneous processes with different dynamic characteristics. No single time discretization method is best suited to solve all processes (e.g., when some are stiff and others non-stiff). In order to address these challenges, multimethods have been proposed: time integration approaches that use different solution strategies for different subsystems. For example, different processes are discretized with different numerical schemes, and different components of the system are solved with different time steps. We discuss several general aspects of multimethods for the integration of multiphysics systems, as well as new developments in the field.

Andrei Stan (ICTP, Cluj-Napoca)

We consider a sequence \( x_k \rightarrow x^\ast\in {\mathbb R}\), with errors \( e_k := |x^\ast - x_k| \), having C-order \(p_0>1\):
\[
\lim\limits_{k\rightarrow \infty}\frac{e_{k+1}}{e_k^{p_0}}=Q\in(0,\infty).
\]
When the order \(p_0\) is unknown, it may be approximated by one of the following two expressions:
\[
p_0 = \lim\limits_{k\rightarrow \infty} \frac{\ln e_{k+1}}{\ln e_k},
\]
or
\[
p_0 = \lim\limits_{k\rightarrow \infty} \frac{\ln \frac{e_{k+2}}{e_{k+1}}}{\ln \frac{e_{k+1}}{e_k}}.
\]

In this talk we compare the speed of convergence to \(p_0\) of these two expressions.
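The two estimators can be tried numerically on a sequence with known C-order, e.g. Newton's iteration for \(\sqrt{2}\), which has \(p_0 = 2\). The sketch below uses high-precision arithmetic so that roundoff does not contaminate the errors.

```python
from decimal import Decimal, getcontext

getcontext().prec = 300  # high precision so roundoff does not pollute the errors

# Newton's iteration for sqrt(2): x_{k+1} = (x_k + 2/x_k)/2, with C-order p0 = 2
target = Decimal(2).sqrt()
x = Decimal("1.5")
errors = []
for _ in range(8):
    errors.append(abs(x - target))
    x = (x + Decimal(2) / x) / 2

# Estimator 1: ln e_{k+1} / ln e_k
p1 = [float(errors[k + 1].ln() / errors[k].ln()) for k in range(len(errors) - 1)]
# Estimator 2: ln(e_{k+2}/e_{k+1}) / ln(e_{k+1}/e_k)
p2 = [float((errors[k + 2] / errors[k + 1]).ln() / (errors[k + 1] / errors[k]).ln())
      for k in range(len(errors) - 2)]

print(p1[-1], p2[-1])  # both approach the true order p0 = 2
```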

Joint work with E. Cătinaş.

Nicolae Suciu (ICTP, Cluj-Napoca)

Despite the impressive progress that has been made in numerical modelling of reactive transport in natural porous media, there are still technical issues that need to be further investigated. With significant impact in many applications are the numerical diffusion in coupled flow and transport problems, the solution feasibility for linear flow equations with highly heterogeneous coefficients, and the challenges posed by conducting code verification and convergence tests for the degenerate Richards equation. These cumbersome issues will be described and commented on from the perspective of a generalized random walk solution approach.

Alexandru Tămășan (University of Central Florida, Orlando, Florida, USA)

The range of the X-ray transform of symmetric tensors plays an important role in Computerized Tomography (0th-order tensors) for data denoising, data completion, or hardware failure and calibration problems. The X-ray transform of higher order tensors also appears in Doppler Tomography (1st-order tensors) and Seismology (2nd-order tensors). With lines parametrized as points on the torus, the X-ray data can be understood as a function on the torus.

I will present new necessary and sufficient constraints on the X-ray data in terms of the Fourier coefficients on the lattice of pairs of integers. The derivation uses the theory of A-analytic maps.

Gabriel Turinici (Université Paris Dauphine, France)

Motivated by statistical learning applications, stochastic descent optimization algorithms are widely used today to tackle difficult numerical problems. One of the best known among them, Stochastic Gradient Descent (SGD), has been extended in various ways, resulting in Adam, Nesterov, momentum, etc. After a brief introduction to this framework, we introduce in this talk a new approach, called SGD-G2, which is a high order Runge-Kutta stochastic descent algorithm; the procedure allows for step adaptation in order to strike an optimal balance between convergence speed and stability.

Numerical tests on standard datasets in machine learning are also presented together with further theoretical extensions.
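For readers unfamiliar with the framework, the baseline SGD on a toy one-parameter least-squares problem can be sketched as follows (baseline only: the Runge-Kutta step adaptation of SGD-G2 is not reproduced here).

```python
import random

# Plain stochastic gradient descent on a toy least-squares problem: minimize
# over w the expected loss E[(w*a - b)^2] for samples (a, b) generated from
# b = 3*a + noise.  (Hypothetical toy data; SGD-G2 itself is not shown.)
random.seed(0)
data = [(a, 3.0 * a + random.gauss(0.0, 0.1))
        for a in [random.uniform(-1, 1) for _ in range(200)]]

w, step = 0.0, 0.1
for epoch in range(50):
    random.shuffle(data)
    for a, b in data:
        grad = 2.0 * (w * a - b) * a   # stochastic gradient of the one-sample loss
        w -= step * grad
print(w)  # close to the underlying slope 3
```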

Part of this work is a collaboration with I. Ayadi.