This is the second edition of the NA-NM-AT conference series organized by ICTP (the first edition, NA-NM-AT 2022, was held last year).

During November 6-9, as part of the 2023 edition of the Cluj Academic Days,
Tiberiu Popoviciu Institute of Numerical Analysis (Romanian Academy) will be organizing the online conference

Numerical Analysis, Numerical Modeling, Approximation Theory (NA-NM-AT 2023)

The focus will be on numerical applications in different areas (Numerical Analysis, Numerical Modeling, Scientific Computing).
A special emphasis will be on different practical aspects (programming languages, parallel computing, GPU computing, etc.).
Some numerical or theoretical aspects from Approximation Theory (studied at the Institute, or by some collaborators) will be also presented.

This edition aims to invite Romanian scientists worldwide.
Organizers: Emil Cătinaș and Mihai Nechita.

  • November 6: the book of abstracts has been posted.
  • November 5: the final program (including Zoom links) has been posted (tab above).
  • October 31: the tentative program has been announced.
  • November 1 (extended from October 25): deadline for the speakers to send the titles and abstracts.
  • October 1: the list of confirmed speakers so far has been posted online, together with the titles for the talks.
  • August 20: the conference e-mail address nanmat@ictp.acad.ro is working normally after having had some issues.

Final program (drive). Zoom links are included in the program for each day.

 

Confirmed speakers (last update 04.11.2023)

Octavian Agratini (ICTP and Babeș-Bolyai University, Cluj-Napoca)

The talk aims at a generalization of the known Gauss-Weierstrass integral operators introduced two decades ago by Eugeniusz Wachnicki. It is intimately connected with a generalization of the heat equation.

Our new results concern two aspects: obtaining an asymptotic expansion for this generalized transform when applied to a certain class of signals, and getting the asymptotic expansion of derivatives of any order of Wachnicki’s operators. All coefficients are explicitly calculated and distinct expressions are provided for analytic functions.
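
For background (this is the classical operator, not the generalization discussed in the talk), recall the Gauss-Weierstrass transform
\begin{equation}
(W_t f)(x)=\frac{1}{\sqrt{4\pi t}}\int_{\mathbb{R}}e^{-\frac{(x-y)^{2}}{4t}}f(y)\,dy, \qquad t>0,
\end{equation}
for which \(u(x,t)=(W_t f)(x)\) solves the heat equation \(u_t=u_{xx}\) with initial data \(u(x,0)=f(x)\); this is the connection with the heat equation alluded to above.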

Mihai Anițescu (Argonne National Laboratory, USA)

Many engineering control and optimization problems occupy an extensive area of space and time. These include production cost models in energy or the control of central plants. Model predictive control, one of the favorite approaches for physics-centered control, generally relies on direct numerical solvers, which tend to run out of memory for such problems. Approaches relying on domain decomposition are then key to fit in memory, but theory for such second-order methods tends to be lacking in multicomponent systems.

To address this issue, we prove that certain classes of graph-indexed optimization (GIO) problems exhibit exponential decay of sensitivity with respect to perturbation in the data. GIOs include dynamic optimization (for the linear graph), or distributed, including network control (for the mesh/network-time product graph). This feature allows for very efficient approximation, solutions, or policies based on domain decomposition with overlap relative to centralized or monolithic approaches. In particular, we prove that the proper efficiency metric increases exponentially fast with the overlap size. Immediate consequences of such behavior are that distributed control policies with overlap approach the performance of centralized policies exponentially fast and that Schwarz-type algorithms exhibit, in addition to exceptional parallelism and reduced memory footprint per subproblem, a linear rate of convergence that tends exponentially fast to zero.

Constantin Băcuță (University of Delaware, USA)

We consider a model convection-diffusion problem and present our recent numerical and analytical results regarding the mixed finite element formulation and discretization in the singularly perturbed case, when the convection term dominates the problem. Using the concepts of optimal norm and saddle point reformulation, we found new error estimates for the case of uniform meshes. We compare the standard linear Galerkin discretization to a saddle point least squares discretization that uses quadratic test functions, and explain the non-physical oscillations of the discrete solutions. We also relate a known upwinding Petrov-Galerkin method to the streamline diffusion discretization method, by emphasizing the resulting linear systems and by comparing appropriate error norms. The results can be extended to the multidimensional case in order to find efficient approximations for more general singularly perturbed problems, including convection-dominated models.

Lori Badea (IMAR – Romanian Academy, Bucharest)

First, we introduce and prove the global convergence of a multilevel method for variational inequalities of the first kind. The method is introduced as a subspace correction algorithm in a reflexive Banach space, where general convergence results are derived. This algorithm becomes a multigrid method once the finite element spaces are introduced. In this case, the global convergence rate is written in terms of the number of levels. A direct extension of the previous method to variational inequalities of the second kind is not evident, but multilevel methods based on it can be introduced for such inequalities.
In this talk, we introduce a multilevel algorithm for variational inequalities of the second kind based on the Moreau-Yosida regularization of the non-differentiable term of the inequality. In this way, we get a problem given by a variational inequality of the first kind. We prove that the solution of the regularized problem converges to the solution of the initial problem, and to solve it, we consider the multilevel method already studied.
The numerical experiments showed a very good convergence of the method even for values of the regularization parameter approaching zero.

Radu Bălan (University of Maryland, USA)

In this talk, we discuss Euclidean embeddings of metric spaces induced by a finite subgroup G of the orthogonal group acting on a linear space V. In particular we focus on bi-Lipschitz properties of representations induced by rearrangements of coorbits. This is joint work with Efstratios Tsoukanis (UMD).

Beniamin Bogoșel (École Polytechnique, Palaiseau, France)

Optimization of shape functionals under convexity, diameter or constant width constraints is challenging from a numerical point of view. The support and gauge functions allow a functional characterization of these constraints. Functions describing convex sets are discretized using truncated spectral decompositions or values on a uniform grid. I will present the resulting numerical frameworks, together with various applications from convex geometry and spectral optimization.
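
As a rough illustration of the support function idea (my own toy sketch with assumed Fourier coefficients, not the speaker's code): a planar convex body can be encoded by a truncated Fourier expansion of its support function \(h(\theta)\), and convexity then amounts to the pointwise constraint \(h''+h\geq 0\).

```python
import numpy as np

# assumed (hypothetical) Fourier coefficients of the support function h(theta)
a = np.array([1.0, 0.0, 0.05])   # cosine coefficients a_0, a_1, a_2
b = np.array([0.0, 0.0, 0.03])   # sine coefficients   b_0, b_1, b_2
theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)

h   = sum(a[k] * np.cos(k * theta) + b[k] * np.sin(k * theta) for k in range(3))
hp  = sum(k * (-a[k] * np.sin(k * theta) + b[k] * np.cos(k * theta)) for k in range(3))
hpp = sum(-k**2 * (a[k] * np.cos(k * theta) + b[k] * np.sin(k * theta)) for k in range(3))

print("discrete convexity check h'' + h >= 0:", bool(np.all(hpp + h >= 0)))
# boundary parametrization of the convex body recovered from its support function
x = h * np.cos(theta) - hp * np.sin(theta)
y = h * np.sin(theta) + hp * np.cos(theta)
```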

I will also discuss a different point of view on constrained optimization problems. The Blaschke-Santalo diagrams can completely characterize all possible inequalities between various quantities, under eventual constraints. The complete theoretical characterization of such diagrams is often difficult to obtain, motivating the interest for developing algorithms allowing their numerical approximation.

Imre Boros (ICTP, Cluj-Napoca)

Stiff ordinary differential equations (ODEs) are challenging to solve numerically, but artificial intelligence (AI) offers a promising new approach. Physics-informed neural networks (PINNs) can be trained to learn the dynamics of a system from both data and physical laws. PINNs have been shown to be effective at solving stiff ODEs with high accuracy and efficiency. This presentation will discuss the potential of AI to aid in solving stiff ODEs, introduce PINNs, and exemplify their application to a stiff ODE.
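
A minimal PINN sketch along these lines (my own illustration on an assumed toy problem, not the code from the talk): a small network is trained so that its output satisfies the stiff linear ODE \(y'=-50\,(y-\cos t)\), \(y(0)=0\), by penalizing the ODE residual at collocation points.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
t_col = torch.linspace(0.0, 1.0, 200).reshape(-1, 1).requires_grad_(True)

# training hyperparameters are illustrative; stiff problems may need more careful tuning
for step in range(5000):
    opt.zero_grad()
    y = net(t_col)
    dy = torch.autograd.grad(y, t_col, torch.ones_like(y), create_graph=True)[0]
    residual = dy + 50.0 * (y - torch.cos(t_col))   # ODE residual y' + 50*(y - cos t)
    ic = net(torch.zeros(1, 1))                     # initial condition y(0) = 0
    loss = (residual ** 2).mean() + (ic ** 2).mean()
    loss.backward()
    opt.step()
```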

Renata Bunoiu (University of Lorraine, France)

We present the homogenization of a scalar problem posed in a composite medium made up of two materials, a positive and a negative one. An important feature is the presence of a flux jump across their oscillating interface. The main difficulties of this study are due to the sign-changing coefficients and to the appearance of an unsigned surface integral term in the variational formulation. A proof by contradiction (nonstandard in this context) and T-coercivity techniques are used in order to cope with these difficulties.

Daniela Căpățînă (Université de Pau, France)

In this talk, we are interested in locally constructing a conservative flux for conforming finite element solutions of elliptic interface problems with discontinuous coefficients. We employ Nitsche’s method, which imposes the Dirichlet boundary condition weakly.

The construction method is derived from an auxiliary mixed formulation, with one solution coinciding with the finite element solution and the other solution naturally used to define the conservative flux in the Raviart-Thomas space. On the one hand, this approach yields a unified framework for the usual finite element methods (conforming, nonconforming and discontinuous Galerkin) of arbitrary polynomial degree, allowing us to establish relations between the different fluxes. On the other hand, contrary to existing reconstruction methods, no mixed problem needs to be solved.

One important application of such fluxes is in a posteriori error analysis and adaptive mesh refinement, since they provide simple a posteriori error estimators, which guarantee the reliability bound with the reliability constant equal to one. We then apply the recovered flux to the a posteriori error estimation and prove the robust reliability and efficiency, under the assumption that the diffusion coefficient is quasi-monotone.

We will also discuss the extension to unfitted finite element methods, where the mesh does not fit the interface/boundary. Numerical experiments are provided to illustrate the theoretical results.

Emil Cătinaş (ICTP, Cluj-Napoca)

We analyze the strict superlinear (SSL) convergence order, which is clearly much faster than linear, but not necessarily slower than, say, quadratic.

We show that the set of SSL sequences can be partitioned into four distinct classes: weak, medium, strong (in increasing order of speed) and mixed (which cannot be assessed).

The sequences from the medium and weak classes are slower than the sequences with high classical C-orders \(p>1\), while an example shows that certain sequences from the mixed class may be term-by-term faster than some sequences with infinite C-order.

One can evaluate numerically the class to which an SSL sequence belongs.

Some recent results of Rodomanov and Nesterov (2022), resp. Ye et al. (2023) show that certain classical quasi-Newton methods (DFP, BFGS and SR1) belong to the weak class.
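
As a small illustration of the numerical evaluation mentioned above (my own sketch, not taken from the talk), one can monitor both the error ratios \(e_{k+1}/e_k\) and the usual computational order estimate for a given error sequence; for \(e_k=1/k!\) the ratios tend to zero (strict superlinear convergence) while the order estimates only decrease toward 1 (no classical C-order \(p>1\)).

```python
import math

def order_estimates(e):
    # standard computational convergence order estimate from three consecutive errors
    return [math.log(e[k + 1] / e[k]) / math.log(e[k] / e[k - 1])
            for k in range(1, len(e) - 1)]

errs = [1.0 / math.factorial(k) for k in range(1, 12)]         # e_k = 1/k!
print([errs[k + 1] / errs[k] for k in range(len(errs) - 1)])   # ratios -> 0: strict superlinear
print(order_estimates(errs))                                   # estimates decrease toward 1
```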

Iulian Cîmpean (University of Bucharest and IMAR)

We introduce a probabilistic numerical approach for the reconstruction of the unknown boundary data of the steady-state heat equation in a bounded domain in R^d, given discrete measurements inside the domain and on a part of the boundary. Numerical experiments are included and, if time allows, we shall provide some theoretical results which reveal that our approach is designed to spectrally approximate the inverse operator that we deal with. This is based on joint work with A. Grecu and L. Marin.

Radu Cîmpeanu (University of Warwick, UK)

The ability to robustly and efficiently control the dynamics of nonlinear systems lies at the heart of many current technological challenges, ranging from drug delivery systems to ensuring flight safety. Most such scenarios are too complex to tackle directly, and reduced-order modelling is used in order to create viable representations of the target systems. The simplified setting allows for the development of rigorous control-theoretical approaches, but the propagation of their effects back up the hierarchy and into real-world systems remains a significant challenge. Using the canonical setup of a liquid film falling down an inclined plane under the action of active controls, we develop a multi-level modelling framework containing both analytical models and direct numerical simulations acting as a computational platform. Three separate approaches will be described: 1. a simple yet powerful analytically-informed feedback strategy via blowing/suction (Cimpeanu, Gomes and Papageorgiou, Nonlinear Dynamics, 2021), 2. a linear-quadratic regulation optimal control methodology (Holroyd, Cimpeanu and Gomes, arXiv:2301.11379, 2023), and 3. a model-predictive control loop using electrostatic effects aimed at dripping prevention (Wray, Cimpeanu and Gomes, Physical Review Fluids, 2022). In all cases, the extended ranges of applicability of the hybrid mechanisms, as well as the detailed effects of the controls in terms of stability and treatment of nonlinearity are examined in detail. This helps us gain a systematic understanding of the information transfer inside the flows, ultimately informing next-generation data-driven interfacial control strategies.

Co-authors: Susana Gomes (Warwick), Oscar Holroyd (Warwick), Alexander Wray (Strathclyde)

Nicolae Cîndea (University of Clermont Auvergne, France)

The aim of this talk is to present a model, and the associated numerical simulations, for the inertial migration of particles in a fluid flowing in three-dimensional channels. The main idea of the proposed method is to reduce the initial problem coupling Navier-Stokes equation to the equation describing the displacement of a spherical particle immersed in the fluid by a first-order expansion with respect to the Reynolds number. In this way, the computation of the velocity of the spherical particle is reduced to solving several Stokes problems. The results making the object of this presentation were obtained in collaboration with Laurent Chupin.

Adrian Constantin (University of Vienna, Austria)

We will discuss stratospheric planetary flows in the atmosphere of the outer planets of our solar system, modelled by stationary solutions of Euler’s equation on a rotating sphere.
We present existence results as well as some rigidity results (ensuring that the solutions are either zonal or rotated zonal solutions), and discuss the stability of Rossby–Haurwitz waves.

Maria Crăciun (ICTP, Cluj-Napoca)

In this talk we will present the fitting of the theoretical predictions of the rotation curves in conformal gravity to data for several galaxies included in the SPARC database, using the Multi Start and Global Search methods. In the total expression of the tangential velocity we also include the effects of the baryonic matter and the mass-to-luminosity ratio. The preliminary findings suggest that the solution offered by Weyl geometric gravity effectively explains a significant portion of the rotation curves within the SPARC sample. (Joint work with Tiberiu Harko.)

Dan Crișan (Imperial College London, UK)

Stochastic partial differential equations are widely used to model the evolution of uncertain dynamical systems in geophysical fluid dynamics. For a judicious modelling of the evolution of a fluid flow, the noise term needs to be properly calibrated. Lagrangian methods have been developed in this sense, where particle trajectories are simulated starting from each point on both the physical grid and its refined version, then the differences between the particle positions are used to calibrate the noise. This is computationally expensive and not fully justified from a theoretical perspective. We propose an Eulerian alternative which can be applied to a general class of stochastic models. In Part I we will introduce the theoretical justification for this new calibration method, while in Part II we will present the calibration results for a stochastic rotating shallow water model.

This is joint work with Alexander Lobbe (Imperial College London) and the results are being published in “Noise calibration for SPDEs: a case study for the rotating shallow water model” (Foundations of Data Science), arXiv:2305.03548.

Dacian Dăescu (Portland State University, USA)

Data assimilation systems (DAS) for numerical weather prediction (NWP) combine information from a numerical model, observational data, and error statistics to analyze and predict the state of the atmosphere. Variational methods (3D-Var, 4D-Var) produce an estimate (analysis) of the true state by solving a large-scale optimization problem. The rapid growth in the data volume provided by satellite-based instruments has prompted research to assess and improve the forecast impact (“value”) of high-resolution observations.
This talk presents theoretical and practical aspects of the sensitivity analysis in a 4D-Var DAS including evaluation of the forecast sensitivity to observations (FSO), prior state estimate, and parameterized error covariance models. An FSO-based optimization approach is formulated to identify deficiencies in the weight assigned to various observing system components and adaptively improve the use of observations.
The practical ability to implement this methodology is demonstrated in a computational environment that features all elements necessary for applications to NWP.

Andrei Drăgănescu (University of Maryland Baltimore County, USA)

The aim of this research is to develop efficient multigrid preconditioners for a classic linear-quadratic optimization problem constrained by an elliptic equation with stochastic coefficients. Using a discretize-then-optimize approach, in previous work we have shown that strategies inherited from the associated deterministic optimal control problem extend to the stochastic version when a stochastic Galerkin discretization is employed. In this talk we show that similar strategies succeed when discretizing the elliptic equation using a sparse grid stochastic collocation approach.
Authors: Sumaya Alzuhairy, Andrei Draganescu and Bedrich Sousedik.

Ionuț Farcaș (University of Texas at Austin, USA)

In many fields of science, comprehensive and realistic computational models are available nowadays. Often, the respective numerical calculations call for the use of powerful supercomputers, and therefore only a limited number of cases can be investigated explicitly. This prevents straightforward approaches to important tasks like uncertainty quantification and sensitivity analysis. This challenge can be overcome via our recently developed sensitivity-driven dimension-adaptive sparse grid interpolation strategy. The method exploits, via adaptivity, the structure of the underlying model (such as lower intrinsic dimensionality and anisotropic coupling of the uncertain inputs) to enable efficient and accurate uncertainty quantification and sensitivity analysis at scale. Here, we demonstrate the efficiency of this adaptive approach in the context of fusion research.
In a realistic, computationally expensive scenario of turbulent transport in a magnetic confinement tokamak device with more than 264 million spatial degrees-of-freedom and eight uncertain parameters, for which a single simulation requires more than 8,100 CPU hours on 16 nodes on a supercomputer, we reduce the effort by at least two orders of magnitude.
In addition, we show that this refinement method intrinsically provides an accurate surrogate model that is nine orders of magnitude cheaper than the high-fidelity model.

Silviu Filip (INRIA Rennes, France)

Large scale deep neural networks (DNNs) have achieved remarkable success in many application scenarios. However, high computational complexity and energy costs of modern DNNs make their deployment challenging. Model quantization is a common approach to deal with deployment constraints, but searching for optimized bit-widths can be challenging. In this talk I will present a learning-based method that automatically optimizes weight and activation signal bit-widths during training for more efficient DNN inference.
Compared to other methods that are generally designed to be run on a pretrained network, our approach works well in both training from scratch and fine-tuning scenarios.

Călin-Ioan Gheorghiu (ICTP, Cluj-Napoca)

The most important example of the pseudo-parabolic problems we deal with is a Cauchy problem attached to the Camassa-Holm equation formulated on an unbounded domain. To remove the inconveniences related to the domain not being bounded, we use collocation based on sinc functions, which make it possible to capture the behavior of the solution at infinity. To simulate the evolution in time, various finite difference schemes are used.
The treated problem has the property of being bi-Hamiltonian, and the methods used so far fail to conserve these invariants satisfactorily; we will focus our future efforts in this direction.
The solutions already obtained are presented and their accuracy is discussed.
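
A brief sketch of the sinc (cardinal) approximation underlying such collocation (assumed step size and test profile, purely illustrative): a function that decays at infinity is represented as \(f(x)\approx\sum_k f(kh)\,\mathrm{sinc}\big((x-kh)/h\big)\), which is naturally suited to unbounded domains.

```python
import numpy as np

h, K = 0.5, 30                                   # assumed mesh size and truncation level
f = lambda x: 1.0 / np.cosh(x)                   # a smooth profile decaying at infinity
x = np.linspace(-5.0, 5.0, 201)

# cardinal sinc expansion: f(x) ~ sum_k f(k*h) * sinc((x - k*h)/h), np.sinc(t) = sin(pi t)/(pi t)
approx = sum(f(k * h) * np.sinc((x - k * h) / h) for k in range(-K, K + 1))
print(np.max(np.abs(approx - f(x))))             # small error: rapid convergence for analytic, decaying f
```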

Sever Hîrștoagă (INRIA Paris, France)

In this talk I present the derivation of a first-order reduced model to approximate the solution of a stiff ordinary differential equation. This is based on asymptotic expansions with two scales. Such a model is free of high oscillations in time and thereby has a low computational cost. Then, I propose a volume-preserving method to numerically solve the problem. Some numerical results are provided to underline the properties of the reduced model.

Traian Iliescu (Virginia Tech, USA)

Over the past decade, several stabilization strategies have been developed to tackle the reduced order model (ROM) inaccuracy in the convection-dominated, under-resolved regime, i.e., when the number of degrees of freedom is too small to capture the complex underlying dynamics. In this talk, I will survey regularized reduced order models (Reg-ROMs), which are simple, modular stabilizations that employ ROM spatial filtering of various terms in the Navier-Stokes equations (NSE) to alleviate the spurious numerical oscillations generally produced by standard ROMs in the convection-dominated, under-resolved regime. I will focus on two different types of Reg-ROM strategies: (i) the evolve-filter-relax ROM (EFR-ROM), which first filters an intermediate velocity approximation, and then relaxes it; and (ii) the Leray-ROM (L-ROM), which filters the convective term in the NSE. Throughout my talk, I will highlight the impact made by ROM spatial filtering on the Reg-ROM development. Specifically, I will talk about the two main types of ROM spatial filters: (i) the ROM differential filter; and (ii) the ROM projection. I will also propose two novel higher-order ROM differential filters. An important role in ROM spatial filters and Reg-ROMs is played by the ROM lengthscale.

In my talk, I will put forth a novel ROM lengthscale, which is constructed by leveraging energy balancing arguments.  I emphasize that this novel energy-based lengthscale is fundamentally different from the standard ROM lengthscale introduced decades ago, which is based on simple dimensional arguments.  Finally, I will illustrate the success achieved by ROM spatial filters and Reg-ROMs in under-resolved numerical simulations of the turbulent channel flow.

Throughout my talk, I will discuss numerical analysis results proved for the Reg-ROMs that we propose, including fundamental properties, e.g., stability, convergence, and parameter scalings. I will also present some of the challenges and open questions in the development of rigorous numerical analysis foundations for ROM closures and stabilizations.

Stelian Ion (ISMMA – Romanian Academy, Bucharest)

Surface reconstruction from scattered data is a 2D interpolation problem. There are many theoretical results and even more numerical algorithms related to the construction of a surface from sampled points. If the measurements are affected by random errors, the constructed surface exhibits oscillations and the numerical algorithms must mitigate the errors to produce a smooth surface. If one is only interested in the shape of the surface, then the penalized least squares method generally furnishes good results. The problem becomes more difficult if one wishes to generate surfaces with smooth curvatures. In this talk we will analyze a class of interpolation methods able to solve such problems.

Joint work with Dorin Marinescu and Stefan Cruceanu.

Mioara Joldes (CNRS, France)

In computer-aided mathematical proofs, a basic, yet critical, building block is the problem of actually obtaining numerical values. In practice, one strives to achieve precise and/or guaranteed results without compromising efficiency. We briefly introduce basic tools like interval arithmetic and then focus on effectively computing polynomial approximations (expressed as truncated Taylor, Chebyshev or trigonometric series) together with validated error bounds. In particular, we discuss a posteriori validation with Newton-like operators. Finally, all these techniques are applied to the efficient finite precision evaluation of numerical functions. We provide examples ranging from robust space mission design to computer-aided proofs in dynamical systems.
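
A toy illustration of the interval arithmetic mentioned above (my own sketch; a real validated computation would use a library with directed rounding, such as MPFI or Arb, which plain floating point below does not provide):

```python
class Interval:
    """Naive interval arithmetic: every operation returns an enclosure of the exact range."""
    def __init__(self, lo, hi): self.lo, self.hi = lo, hi
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __repr__(self): return f"[{self.lo}, {self.hi}]"

x = Interval(0.3, 0.4)
print(x * (Interval(1.0, 1.0) - x))   # encloses the range of x*(1-x) over [0.3, 0.4]
```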

Oana Lang (Imperial College London, UK)

Stochastic partial differential equations are widely used to model the evolution of uncertain dynamical systems in geophysical fluid dynamics. For a judicious modelling of the evolution of a fluid flow, the noise term needs to be properly calibrated. Lagrangian methods have been developed in this sense, where particle trajectories are simulated starting from each point on both the physical grid and its refined version, then the differences between the particle positions are used to calibrate the noise. This is computationally expensive and not fully justified from a theoretical perspective. We propose an Eulerian alternative which can be applied to a general class of stochastic models. In Part I we will introduce the theoretical justification for this new calibration method, while in Part II we will present the calibration results for a stochastic rotating shallow water model.

This is joint work with Alexander Lobbe (Imperial College London) and the results are being published in “Noise calibration for SPDEs: a case study for the rotating shallow water model” (Foundations of Data Science), arXiv:2305.03548.

Cătălin Lefter (Al.I. Cuza University and Octav Mayer Institute, Iași)


We consider systems of reaction-diffusion equations in annular domains in \(\mathbb{R}^d\), coupled in the zero order terms and with general homogeneous mixed boundary conditions. We establish Lipschitz estimates in \(L^2\) for the source in terms of observations on the solution or on its normal derivative on a connected component of the boundary. The main tools are appropriate Carleman estimates in the \(L^2\)-norm for nonhomogeneous parabolic systems with boundary observations, together with some strong maximum type principles and invariance results for parabolic equations and systems.

Work in collaboration with Elena-Alexandra Melnig.

Daniel Lesnic (University of Leeds, UK)

The heat transfer coefficient (HTC) characterises the contribution that an interface makes to the overall thermal resistance of the system and is defined in terms of the heat flux across the surface for a unit temperature gradient. It is well known that the HTC is an important quantity to determine in heat transfer. In this talk, the determination of the time-dependent, space-dependent or temperature-dependent heat transfer coefficient in transient heat conduction from standard or non-standard measurements is investigated. For these nonlinear ill-posed inverse problems the uniqueness of the solution holds. Numerical results obtained using the boundary element method with regularization are presented and discussed.

(joint work with T. Onyango and M. Slodicka)

Daniela Lupu (Politehnica University, Bucharest)


This paper presents a stochastic higher-order algorithmic framework for minimizing finite-sum (possibly nonconvex and nonsmooth) optimization problems. Our framework is based on stochastic higher-order upper bound approximations of the (non)convex and/or (non)smooth finite-sum objective function, leading to a minibatch stochastic higher-order majorization-minimization algorithm, which we call SHOM. We derive convergence guarantees for the SHOM algorithm for general optimization problems when the upper bounds approximate the finite-sum objective function up to an error that is \(p \geq 1\) times differentiable and has a Lipschitz continuous \(p\)-th derivative; we call such upper bounds stochastic higher-order surrogate functions. The main challenge in analyzing the convergence of SHOM, especially in the nonconvex setting, is the fact that even in expectation the cost cannot be used as a Lyapunov function, unless a proper secondary sequence is constructed from the algorithm. For general nonconvex problems we prove that SHOM is a descent method in expectation, derive asymptotic stationary point guarantees and, under the Kurdyka-Lojasiewicz (KL) property, establish the first local linear or sublinear convergence rates (depending on the KL parameter) for stochastic higher-order type algorithms under such an assumption. For convex problems we derive local superlinear or linear convergence results, provided that the objective function is uniformly convex.

Andra Malina (ICTP, Cluj-Napoca)

The main aim of this talk is to present a method for reconstructing damaged black and white or color images, considering both a global and a local approach. The method is based on the bivariate Shepard operator combined with the inverse quadratic and the inverse multiquadric radial basis functions. We evaluate the results by studying the approximation errors.
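
A hedged sketch of the kind of Shepard-type interpolant involved (my own simplified version with inverse quadratic weights, not the authors' scheme): a damaged pixel is filled in from nearby intact pixel values.

```python
import numpy as np

def shepard_inverse_quadratic(x, nodes, values, eps=1.0):
    """Shepard-type interpolant with inverse quadratic weights w_i = 1/(1 + (eps*r_i)^2)."""
    r2 = np.sum((nodes - x) ** 2, axis=1)      # squared distances to the known data sites
    w = 1.0 / (1.0 + eps ** 2 * r2)
    return np.dot(w, values) / np.sum(w)

# hypothetical intact grey levels around a damaged pixel at (0.5, 0.5)
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([100.0, 120.0, 110.0, 130.0])
print(shepard_inverse_quadratic(np.array([0.5, 0.5]), nodes, values))   # 115.0 by symmetry
```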

Liviu Marin (University of Bucharest and ISMMA – Romanian Academy)

We investigate, from both the theoretical and the numerical viewpoints, the acceleration of the iterative algorithms of Kozlov et al. (Comput. Math. Math. Phys. 31 45-52, 1991) for the accurate, convergent and stable reconstruction of the missing data (both the temperature and the normal heat flux) on an inaccessible boundary of the domain occupied by a solid from the knowledge of Cauchy data on the remaining and accessible boundary in the framework of stationary anisotropic heat conduction without heat sources. For each of the two algorithms with relaxation considered, this inverse Cauchy problem in anisotropic heat conduction with exact data is transformed into an equivalent fixed-point problem for an associated operator that is defined on and takes values in a suitable function space, and accounts for the relaxation parameter. Consequently, the convergence of each relaxation algorithm reduces to investigating the properties of the corresponding operator. This enables one to determine the admissible range for the relaxation parameter along with a criterion for selecting its optimal value at each iteration for each iterative algorithm and exact Cauchy data. The numerical implementation is realised for 2D and 3D homogeneous anisotropic solids via the meshless method of fundamental solutions and confirms a significant reduction in the number of iterations and hence the CPU time required for the two relaxation algorithms proposed to achieve convergence, provided that the dynamical selection of the optimal value for the relaxation parameter is employed.

Sorin Mitran (Univ. North Carolina at Chapel Hill)


Biological tissue exhibits significant variability in mechanical properties that is usually attributed to random microscopic realizations. A computational approach to nonlinear mechanics of tissue with an underlying stochastic component is introduced. At the bulk continuum level, a hyperelastic formulation is combined with linear viscous models to provide the mean stress and strain fields for microscale constituent elements of the tissue. The hyperelastic behavior is expressed as a system of conservation laws, \(\boldsymbol{q}_{, t} + \nabla_{\boldsymbol{a}} \boldsymbol{f}(\boldsymbol{q}) =\boldsymbol{\psi} (\boldsymbol{q}, \boldsymbol{\xi})\), for the velocity and deformation gradient \(\boldsymbol{q}= (\boldsymbol{v}, \boldsymbol{F})\) in a Lagrangian framework. The flux \(\boldsymbol{f}= (-\boldsymbol{P}(\boldsymbol{F}),-\boldsymbol{Z} (\boldsymbol{v}))\) requires the definition of the dependence of the first Piola-Kirchhoff tensor \(\boldsymbol{P}\) on the deformation gradient \(\boldsymbol{F}\). Viscous effects are captured through the sink term \(\boldsymbol{\psi}\), which incorporates history dependence through the auxiliary variables \(\boldsymbol{\xi}\).

Constitutive laws of known analytical form utilize model parameters \(\boldsymbol{y} (\boldsymbol{\omega})\) that are assumed here to be random functions of the microstate \(\omega\), and can be determined by carrying out integration over all possible microstates \(\boldsymbol{P} (\boldsymbol{F}) = \int_{\Omega} \boldsymbol{G} (\boldsymbol{F}, \omega) p (\omega) d \omega\), where \(\boldsymbol{G}\) is the generating function for \(\boldsymbol{P}\). The probability density function (PDF) \(p (\omega)\) is extracted from microscopic simulations carried out under known mean fields; Brownian Dynamics and Markov Chain Monte Carlo methods are considered. Probability distribution functions characterizing the microscale configuration are organized as a statistical manifold \(\mathcal{M}= \{ p (\omega ; \lambda)\}\), where \(\lambda\) is a parametrization of the manifold. Singular value decomposition of data from successive microscopic states defines the local curvature of \(\mathcal{M}\), and computational geometry provides a local reconstruction of the manifold. Low-parameter PDFs are obtained by geodesic transport on \(\mathcal{M}\), applying concepts of information geometry. The construction is updated during time evolution, forming a macro-microscale interaction loop. The procedure is applied to propagation of shear waves in the brain, using realistic data obtained from histology and compared to ultrasonic shear wave imaging.

Cornel Murea (Université de Haute Alsace, France)

The recent implicit parametrization theorem, based on simple Hamiltonian systems, allows the description of domains and their boundaries and, consequently, it provides a general fixed domain approximation method in shape optimization problems, using optimal control theory. An important new ingredient in the arguments is the differentiability of the period for the Hamiltonian systems, with respect to functional variations. Due to the differentiability properties, we can use descent algorithms of gradient type. In the experiments, our approach can modify the topology both by closing holes or by creating new holes. We underline the applicability of this new methodology to large classes of shape optimization problems.

Joint work with Dan Tiba,  Institute of Mathematics (Romanian Academy), Bucharest

Adrian Muntean (Karlstads Universitet, Sweden)

We study a nonlinearly coupled parabolic system with non-local drift terms modeling at the continuum level the inter-species interaction within a ternary mixture that allows the evaporation of one of the species. In the absence of evaporation, the proposed evolution system coincides with the hydrodynamic limit of a stochastically interacting particle system of Blume-Capel-type driven by the Kawasaki dynamics. We investigate the well-posedness of the target system posed in 3D and present preliminary numerical simulation results for a 2D scenario. We employ an approximation scheme based on finite differences to illustrate the effect of changing the characteristic time scale of the evaporation rate on the shape and connectivity of the evolving-in-time morphologies. The precise structure of our evolution system is motivated by technological issues involved in the design of organic solar cells, however, similar structures of model equations arise in other materials science-related contexts that are conceptually related (e.g. in the design of the internal structure of thin adhesive bands). This is joint work with Rainey Lyons (Karlstad), Stela Andrea Muntean (Karlstad), and Emilio N. M. Cirillo (Rome).

Mihai Nechita (ICTP and Babeș-Bolyai University, Cluj-Napoca)

We consider the unique continuation problem for the Helmholtz equation and study its numerical approximation with two methods: high-order conforming FEM and physics-informed neural networks (PINNs).
For conforming FEM, regularization is added on the discrete level using gradient jump penalty, Galerkin least squares and a scaled Tikhonov term. The method is shown to converge in terms of the stability of the problem and the polynomial degree of approximation.
For PINNs, one can use the conditional stability of the problem to bound the generalization error.
We present numerical experiments in 2d for different frequencies and for geometric configurations with different stability bounds.

Maria Neuss-Radu (Friedrich Alexander Universitaet, Erlangen-Nuernberg, Germany)


We consider a nonlinear drift-diffusion system for multiple charged species in a porous medium in 2D and 3D with periodic microstructure. The system consists of a transport equation for the concentration of the species and Poisson’s equation for the electric potential. The diffusion terms depend nonlinearly on the concentrations. We consider non-homogeneous Neumann boundary condition for the electric potential. The aim is the rigorous derivation of an effective (homogenized) model in the limit when the scale parameter \(\epsilon\) tends to zero. This is based on uniform a priori estimates for the solutions of the microscopic model. The crucial result is the uniform \(L^\infty\)-estimate for the concentration in space and time. This result exploits the fact that the system admits a nonnegative energy functional which decreases in time along the solutions of the system. By using weak and strong (two-scale) convergence properties of the microscopic solutions, effective models are derived in the limit \(\epsilon \to 0\) for different scalings of the microscopic model.
This is a joint work with Apratim Bhattacharya (Umea) and Markus Gahn (Heidelberg).

Silviu Niculescu (CNRS, France)

Pole placement is a well-known and classical method for controlling finite-dimensional linear time-invariant (LTI) systems. Roughly speaking, under appropriate controllability conditions, the method consists in assigning poles of the closed-loop system to specified locations by an appropriate choice of the controller gains guaranteeing the exponential stability in closed-loop with a prescribed decay rate of the corresponding system’s solution. The construction makes use of the characteristic polynomial degree and the structural properties (controllability) of the system. Its extension for infinite-dimensional linear systems is far from being trivial. In the case of dynamical systems governed by retarded and/or neutral delay-differential equations (DDEs), there exist only two effective extensions – continuous pole placement and partial pole placement.

This talk provides some insights into the so-called partial pole placement method. By exploiting the notion of degree of a quasipolynomial, and by assigning a real root with maximal admissible multiplicity, we try to answer a simple question: does such an assignment (of real roots) guarantee the stability of the closed-loop system? In other words, we are interested in exploring the situation when the assigned (multiple real) root “governs” the location of the remaining (infinitely many!) roots of the characteristic function. More precisely, in some cases, we can prove that such a multiple root is “dominant” and explicitly defines the spectral abscissa of the dynamical system. This property, called “multiplicity-induced-dominancy” (MID), has no “natural” (non-trivial) equivalent in finite dimension and is specific to DDEs. To perform such an analysis we exploit the analytical properties of Kummer/Whittaker confluent hypergeometric functions. Finally, these ideas open interesting perspectives in interpreting Padé approximations.

(joint work with Islam BOUSSAADA & Guilherme MAZANTI)

Darian Onchis (West University of Timișoara)

Incremental learning refers to a class of machine learning models in which the learning process goes through a continuous model adaptation based on a constantly arriving data stream. This type of learning is needed when access to past data is limited or impossible, but is affected by catastrophic forgetting. This phenomenon consists in a drastic performance drop for previously learned information when ingesting new data. We tackle class-incremental learning without memory by adapting prediction bias correction, a method which makes predictions of past and new classes more comparable. In the same context, we also propose a method to control the dark knowledge values also known as soft targets, with the purpose of improving the training by transfer of knowledge.

Diana Otrocol (ICTP and Technical University of Cluj-Napoca)

In this paper, we prove that, if \(\inf\limits_{x\in A}|f(x)|=m>0\), then the partial differential operator \(D\) defined by \(D(u)=\sum\limits_{k=1}^{n} f_{k}\frac{\partial u}{\partial x_{k}}-fu\), where \(f,f_{i}\in C(A,\mathbb{R})\), \(u\in C^{1}(A,X)\), \(i=1,\ldots,n\), \(I\subset\mathbb{R}\) is an interval, \(A=I\times \mathbb{R}^{n-1}\) and \(X\) is a Banach space, is Ulam stable with the Ulam constant \(K=\frac{1}{m}\). Moreover, if \(\inf\limits_{x\in A}|f(x)|=0\), we prove that \(D\) is not generally Ulam stable.
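
For clarity (a standard recollection of the definition, not part of the abstract): Ulam stability of \(D\) with Ulam constant \(K\) is understood in the usual sense, namely that for every \(\varepsilon>0\) and every \(u\) with \(\Vert D(u)\Vert \leq \varepsilon\) there exists a \(v\) with \(D(v)=0\) such that \(\Vert u-v\Vert \leq K\varepsilon\).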

Joint work with Adela Novac and Dorian Popa.

References

[1] S. M. Jung, Hyers-Ulam stability of linear partial differential equations of first order. Appl. Math. Lett. 2009, 22, 70-74.

[2] N. Lungu, D. Popa, Hyers-Ulam stability of a first order partial differential equation. J. Math. Anal. Appl.  2012, 385, 86–91.

[3] D. Popa, I. Rasa, Hyers-Ulam stability of the Laplace operator. Fixed Point Theory, 2018, 19(1), 379-382.

Noemi Petra (University of California Merced, USA)

Model-based projections of complex systems will play a central role in prediction and decision-making, e.g., anticipating ice sheet contribution to sea level rise. However, models are typically subject to uncertainties stemming from uncertain inputs in the model such as coefficient fields, source terms, etc. Refining our understanding of such models requires solving a (Bayesian) inverse problem that integrates measurement data and the model to estimate and quantify the uncertainties in the parameters. Such problems face a number of challenges, e.g., high-dimensionality of the inversion parameters, expensive-to-evaluate models, and model uncertainty in addition to the uncertainty in inversion parameters. In this talk, I will focus on identifying and exploiting low-dimensional structure in such inverse problems stemming for example from local sensitivity of the data with respect to parameters, diffusive models, sparse data, etc. The key idea is to look at the Hessian operator arising in these inverse problems, which often exhibits low rank and even lower-rank off-diagonal structure, and to build approximations of this operator. This approximation can then be used as a preconditioner for the Newton-Krylov systems arising when solving the inverse problem, or to build more informative proposals in an MCMC context. Combining an efficient matrix-free point spread function (PSF) method for approximating operators with fast hierarchical (H) matrix methods, we show that Hessian information can be obtained using only a small number of operator applications.

Co-authors: Noemi Petra (UC Merced), Nick Alger (UT Austin), Tucker Hartland (LLNL) and Omar Ghattas (UT Austin)

Stefania Petra (Heidelberg University, Germany)

In this talk I will present a geometric multilevel optimization approach choosing as case study a regularised inverse problem. In particular, the approach is motivated by variational models that arise as the discretization of some underlying infinite dimensional problem. Such problems naturally lead to a hierarchy of discretized models. We employ multilevel optimization to take advantage of this hierarchy: while working at the fine level we compute the search direction based on a coarse model. By utilising concepts of information geometry in our formulation, we propose a smoothing operator that only uses first-order information and incorporates constraints smoothly. We show that the proposed algorithm is well suited for ill-posed reconstruction problems and demonstrate its efficiency on several large-scale examples.

Sorin Pop (Hasselt University, Belgium)

We consider a nonlinear, possibly degenerate parabolic equation, modelling e.g. unsaturated flow in a porous medium, or biofilm growth. Implicit time stepping methods are popular due to their stability, allowing one to avoid severe restrictions on the time step. This leads to nonlinear, time-discrete elliptic equations, for which linear iterative schemes are needed for approximating the solution. Due to the degeneracy, the Newton scheme may fail to converge, unless very small time steps are used, possibly after applying a regularisation step.

In this talk, we present an iterative scheme that does not require any regularization, for which linear convergence can be proved rigorously under a mild restriction (if any) on the time step. This convergence is obtained at the level of the elliptic problem, so it is not restricted to any spatial discretisation or mesh. Moreover, the scheme features an improved convergence behavior, being linearly convergent in cases where the Newton scheme diverges, and requiring at most a comparable number of iterations whenever the Newton scheme does converge.

Doru Popovici (Lawrence Berkeley National Laboratory, USA)

Modern supercomputers vary in compute power and network capabilities. For example, Summit and Frontier make use of GPUs for accelerating computation, while relying on either a fat-tree or dragonfly topology for transferring data between the nodes. On the other hand, Fugaku has thousands of CPUs and uses a six-dimensional torus for communication. In this work, we want to study these differences in the context of scaling eigenvalue solvers used in planewave DFT calculations. More specifically, we will focus on the components that appear in nonlinear eigenvalue solvers like Conjugate Gradient, Jacobi Davidson and Unconstrained. We will look at the distributed Fourier computation and some of the linear algebra operations. We will provide an in-depth analysis and offer solutions for scaling the computation to a large number of GPUs. We will emphasize that systematic approaches can be derived to guide the parallelization such that the computation can effectively use the compute and network resources.

Radu Precup (ICTP and Babeș-Bolyai University, Cluj-Napoca)


We discuss the localization of velocity for a problem of the type
\begin{equation}
\left\{
\begin{array}{l}
-\text{div }\left( A\left( x\right) \nabla u\right) +\eta _{0}\left(
x\right) u+\kappa _{0}\left( x\right) \left( u\cdot \nabla \right) u+\nabla
p=\Phi \left( x,u\right) \ \ \ \text{in }\Omega \\
\text{div }u=0\ \ \ \text{in }\Omega \\
u=0\ \ \text{on }\partial \Omega ,
\end{array}%
\right.
\end{equation}
where \(\Phi\) is a reaction term dependent on velocity. First we obtain the localization of the enstrophy, namely \(r\leq \left\vert u\right\vert_{H_{0}^{1}\left( \Omega \right)}\leq R\), and then the localization of the kinetic energy, that is \(r\leq \left\vert u\right\vert_{L^{2}\left( \Omega \right)}\leq R\). The bounds \(r\) and \(R\) are estimated in terms of the reaction force \(\Phi\) and of the system coefficients. The proofs are based on the fixed point formulation of the problem and on the fixed point index.
The results come from joint work with Mirela Kohr, in continuation of the paper: M. Kohr and R. Precup, Analysis of Navier-Stokes models for flows in bidisperse porous media, J. Math. Fluid Mech. 25:38 (2023).

Adrian Sandu (Virginia Tech, USA)

Simulation of complex systems composed of multiple subsystems is often done by resolving the individual subsystems separately and approximating their interaction with the others. This approach is called co-simulation. In this talk we will review popular co-simulation techniques and the latest developments in the field.
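
A toy co-simulation sketch (my own illustration of the general idea, not a method from the talk): two coupled scalar subsystems are advanced separately over each macro step, each seeing a frozen value of the other's state exchanged at the start of the step (Jacobi-type coupling).

```python
import math

dt, x, y = 0.05, 1.0, 0.0                 # macro step and initial states of the two subsystems
for n in range(40):
    x_frozen, y_frozen = x, y             # interface values exchanged at the start of the step
    # each subsystem integrates its own ODE with the other's state held constant:
    #   x' = -x + y_frozen   and   y' = -2*y + x_frozen   (solved exactly on the macro step)
    x = x_frozen * math.exp(-dt) + y_frozen * (1.0 - math.exp(-dt))
    y = y_frozen * math.exp(-2.0 * dt) + 0.5 * x_frozen * (1.0 - math.exp(-2.0 * dt))
print(x, y)                               # approximates the coupled system at t = 2.0
```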

Mircea Sofonea (Université de Perpignan, France)

We deal with a class of elliptic quasivariational  inequalities with constraints in a reflexive Banach space. We use arguments of monotonicity, convexity and compactness in order to prove a convergence criterion for such inequalities. This criterion allows us to consider a new well-posedness concept in the study of the corresponding inequalities, which extends the classical Tykhonov and Levitin-Polyak well-posedness concepts used in the literature. Then, we introduce a new penalty method, for which we provide a convergence result. Finally, we consider a variational inequality which describes the equilibrium of a spring-rods system, under the action of external forces.
We apply our abstract results in the study of this inequality and provide the corresponding mechanical interpretations. We also present numerical simulations which  validate our convergence results.

Andrei Stan (ICTP, Cluj-Napoca)

Using fixed point arguments together with critical point techniques, we obtain a hybrid existence result for a system of operator equations in which only some of the equations admit a variational structure.
The components of the solution corresponding to the equations with variational structure form a Nash-type equilibrium of the associated energy functionals and, moreover, are localized in a bounded convex conical set defined by a norm and a concave upper semicontinuous functional.
This is achieved by a hybrid iterative scheme, based partly on the Minty-Browder theorem and partly on the Ekeland variational principle. An abstract example of a system of differential equations with Dirichlet boundary conditions is given.

Jacob Stokke (University of Bergen, Norway)

Richards' equation models saturated/unsaturated flow in porous media. It is a highly nonlinear, degenerate elliptic-parabolic equation, which is known to be very challenging to solve. In this talk we present a new solver for Richards' equation based on an adaptive switching between the Newton method and the L-scheme. The theoretical properties of the scheme will be discussed and illustrative numerical examples will be shown.
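
To fix ideas about the L-scheme mentioned above, here is a minimal 1D sketch (my own toy setting, not the speaker's solver): one implicit time step of \(b(u)_t = u_{xx}\) with \(b(u)=u^3\) is solved by the robust fixed-point iteration in which the nonlinearity is handled through a constant \(L \geq \max b'\), whereas Newton's method would use the varying derivative \(b'(u)\).

```python
import numpy as np

n, tau = 50, 0.1
h = 1.0 / (n + 1)
b = lambda u: u ** 3                          # monotone, degenerate nonlinearity (b'(0) = 0)
L = 6.0                                       # L-scheme constant: generous upper bound for b'(u)

# 1D Laplacian with homogeneous Dirichlet boundary conditions
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)) / h**2
xg = np.linspace(h, 1.0 - h, n)
u_old = np.sin(np.pi * xg) ** 2               # previous time level

# implicit step: find u such that  b(u) - tau * D2 @ u = b(u_old)
M = L * np.eye(n) - tau * D2                  # constant matrix: factor once in practice
u = u_old.copy()
for it in range(400):
    u_new = np.linalg.solve(M, L * u + b(u_old) - b(u))   # L-scheme update
    if np.max(np.abs(u_new - u)) < 1e-10:
        break
    u = u_new
print(it, np.max(np.abs(b(u) - tau * D2 @ u - b(u_old))))  # iterations used, nonlinear residual
```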

Nicolae Suciu (ICTP, Cluj-Napoca)

We prove that discrete systems of particles with piecewise analytical trajectories can be described macroscopically by almost everywhere continuous fields. The transition from the microscopic to the macroscopic description is achieved through space-time averages on d-dimensional cubes and symmetric time intervals. The fields obtained in this way satisfy non-closed balance equations that are further used to validate the averaging procedure by estimating the intrinsic diffusion coefficients. The approach is applied to the space-time upscaling of reactive transport in variably saturated porous media. Processes at the microscopic level are modeled by as many random walkers as there are molecules involved in the chemical reactions. The space-time scales of the averaging procedure may be chosen as representative for the macroscopic observations and the experimental measurements.

Alexandru Tămășan (University of Central Florida, Orlando, Florida, USA)

I will present a recent reconstruction of a full planar vector field from its Doppler and first moment Doppler transform data in fan beam coordinates. The method of proof is constructive and is based on the theory of A-analytic maps.

Cătălin Trenchea (University of Pittsburgh, USA)

We propose and analyze a second-order partitioned time-stepping method for a two phase flow problem in porous media. The algorithm is based on a refactorization of Cauchy’s one-legged θ-method.

The first part consists of the implicit backward Euler method, while the second part uses a linear extrapolation. In the backward Euler step, the decoupled equations are solved iteratively. We prove that the iterations converge linearly to the solution of the coupled problem, under some conditions on the data. When θ=1/2, the algorithm is equivalent to the symplectic midpoint method. Similar to the continuous case, we also prove a discrete Gibbs free energy balance, without numerical dissipation. We compare this midpoint method with the classic backward Euler method and with two implicit-explicit time-lagging schemes. The midpoint method outperforms the other schemes in terms of rates of convergence, long-time behaviour and energy approximation, for both small and large values of the time step.
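
A minimal sketch of the refactorization idea on a scalar test equation (my own toy illustration with an assumed test problem): the one-legged θ-method is realized as a backward Euler substep of length θΔt followed by a linear extrapolation; for θ = 1/2 this reproduces the implicit midpoint rule.

```python
import math

def theta_step(y, t, dt, f, dfdy, theta=0.5, newton_iters=20):
    # step 1: backward Euler over [t, t + theta*dt], solved with Newton's method
    z = y
    for _ in range(newton_iters):
        g = z - y - theta * dt * f(t + theta * dt, z)
        z = z - g / (1.0 - theta * dt * dfdy(t + theta * dt, z))
    # step 2: linear extrapolation to t + dt (theta = 1/2 gives the implicit midpoint rule)
    return (1.0 / theta) * z - ((1.0 - theta) / theta) * y

f = lambda t, y: -y          # assumed test equation y' = -y, y(0) = 1
dfdy = lambda t, y: -1.0
dt, y = 0.1, 1.0
for n in range(10):
    y = theta_step(y, n * dt, dt, f, dfdy)
print(y, math.exp(-1.0))     # close agreement: second-order accuracy for theta = 1/2
```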

Cătălin Turc (New Jersey Institute of Technology, USA)

We present high-order Convolution Quadratures (CQ) methods for the solution of the wave equation in unbounded domains in two and three dimensions that rely on Nyström discretizations for the Boundary Integral Equation formulations of the ensemble of associated Laplace domain modified Helmholtz problems. Both Dirichlet and Neumann boundary conditions, imposed on open-arc/open surfaces as well  as Lipschitz closed scatterers, are considered. A variety of accuracy tests are presented that showcase the high-order in time convergence (up to and including fifth order) that the Nyström CQ discretizations are capable of delivering and we compare to numerical results in the literature pertaining to time-domain multiple scattering problems solved with other methods.

Marius Tucsnak (Université de Bordeaux, France)

We consider infinite-dimensional time-invariant linear systems which are exactly controllable in every positive time. We focus on the time optimal control problem. After showing the existence of solutions, we show that they satisfy a maximum principle. We then conclude that, under a particular assumption of approximate controllability type, these controls are bang-bang. We finally explain how these abstract results apply to systems described by the Schrödinger and plate equations.

Gabriel Turinici (Université Paris Dauphine, France)

Reinforcement learning techniques are beginning to be used in mathematical finance; we focus in this talk on portfolio allocation tools that extend or complement classical approaches such as Markowitz mean-variance. After briefly pointing to existing references, we present some algorithms belonging to the general class of policy gradient techniques and discuss their implementation.