Symposium on Sparsity and Singular Structures 2024

Plenary Speakers

Balázs Kovács

University of Paderborn

Error Analysis for the Numerical Approximation of Dissipative Harmonic Flows: Higher-Order Methods - Nodal Constraints  |  In this talk we will present some results on optimal-order error estimates for time and space discretisations of dissipative harmonic flows, in particular for the Landau–Lifshitz–Gilbert equation and the harmonic map heat flow. The numerical methods use non-conforming finite element space discretisations, based on a weak formulation due to Alouges, combined with linearly implicit backward difference formula time discretisations up to order 5. The talk will focus on two important aspects: Using approximate tangent spaces defined by L2-averaged orthogonality constraints allows us to show fully discrete optimal-order error estimates of high order. In particular, we will also discuss stability estimates for high-order linearly implicit backward difference approximations. Using instead a scheme with the arguably simpler nodal orthogonality constraints also allows for optimal-order error estimates, but only of low order. We will present numerical experiments illustrating and complementing our theoretical results. The talk is based on joint works with G. Akrivis (Ioannina), M. Feischl (Vienna) & Ch. Lubich (Tübingen), and with S. Bartels & Z. Wang (Freiburg).
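As a rough orientation only (a schematic sketch, not taken verbatim from the talk, with constants normalised): the harmonic map heat flow and the Landau–Lifshitz–Gilbert equation for a unit-length field can be written as

    \partial_t u = \Delta u + |\nabla u|^2 u, \qquad \alpha\,\partial_t m + m \times \partial_t m = \Delta m + |\nabla m|^2 m, \qquad |u| = |m| = 1.

Writing V_h for the scalar finite element space, the nodal tangent space imposes \phi_h(z) \cdot m_h(z) = 0 at every node z, whereas the L2-averaged variant imposes \int (m_h \cdot \phi_h)\,\psi_h\,dx = 0 for all \psi_h \in V_h. The precise spaces, constants and assumptions are as in the cited joint works.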

Christoph Ortner

University of British Columbia

Efficient Parameterization of Many-body Interaction  |  The integration of machine learning (ML) into traditional modelling and simulation workflows over the past few years has opened up the possibility of removing decades-old approximations (e.g., in constitutive laws), leading to new generations of models and numerical methods that far outstrip their predecessors in terms of accuracy and transferability. The best results are often achieved when these new methods judiciously integrate existing domain knowledge, but this must be done with care in order to avoid introducing biases. I will discuss how these general principles are applied in the context of parameterizing interaction laws for particle systems, focussing specifically on the role of model "architecture". Concretely, I will explain the atomic cluster expansion (ACE) (Drautz, 2019) and some of its descendants, which provide a systematic, efficient, and interpretable parameterisation of many-body interaction in particle systems. It can be thought of as a general method to enlarge the design space of equivariant neural networks. ACE is well-suited for parameterising surrogate models of particle systems where it is important to incorporate symmetries and geometric priors into models without sacrificing systematic improvability. The most successful application so far is “learning” interatomic potentials (or force fields), but the applicability is much broader: for example, we have adapted ACE to the parameterisation of Hamiltonians for electronic structure calculators, of wave functions for many-body quantum chemistry, and to jet tagging in particle physics. In the talk I will explain the mathematical framework that enables this breadth of applications, highlight some preliminary rigorous results, and point out a number of theoretical and practical questions and challenges. Although my talk is arguably about machine learning, I will mostly use ideas and language from mathematical modelling and numerical analysis.
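To fix ideas (a schematic sketch loosely following Drautz (2019), with indices and normalisations suppressed): ACE writes a site energy as a body-ordered expansion

    E_i = \sum_{N \ge 0} \; \sum_{j_1 < \dots < j_N} V^{(N)}(r_{i j_1}, \dots, r_{i j_N}),

and evaluates a linear parameterisation of it efficiently by first pooling over neighbours, A_{i,v} = \sum_{j \ne i} \phi_v(r_{ij}), then forming products A_{i,v_1} \cdots A_{i,v_N}, and finally symmetrising these products to obtain an invariant (or equivariant) basis in which the model is linear. The neighbour sum is taken only once per one-particle basis function, which is what keeps high body orders computationally tractable.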

Mark Peletier

TU Eindhoven

Singular-Limit Analysis of Training with Noise Injection  |  Many training algorithms inject some form of noise into the training process. The classical example is the mini-batch noise in Stochastic Gradient Descent, but other examples are dropout, data augmentation, 'noise nodes', 'label noise', and input-data noise. While the additional noise is generally believed to improve generalisation performance, there is little mathematical understanding of how this is achieved. In this talk I will describe recent work, together with Anna Shalova (TU/e) and André Schlichting (Münster), in which we analyse a fairly general class of iterative training schemes with noise injection. In the limit of small noise, we prove convergence of the appropriately rescaled time courses to solutions of an auxiliary evolution equation. This auxiliary equation is a gradient flow driven by a functional for which we obtain an explicit expression, thus opening the door to understanding the different types of regularisation generated by different types of noise injection.
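As a purely generic illustration of the type of result described (the concrete schemes, rescaling and limit functional in the talk may differ), consider an iteration of the form

    \theta_{k+1} = \theta_k - \eta\, \nabla_\theta L(\theta_k; \xi_k),

where \xi_k models the injected noise (mini-batch sampling, perturbed inputs, perturbed labels, dropout masks, ...). A singular-limit statement then says that, as the noise strength tends to zero and after a suitable rescaling of time, the iterates converge to a solution of a gradient flow \dot{\theta} = -\nabla G(\theta) for an explicit effective functional G; the difference between G and the original loss encodes the implicit regularisation induced by the noise.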

Carola Schönlieb

University of Cambridge

Data-driven Regularization for Inverse Problems - the Dos and Don’ts  |  Inverse problems are about the reconstruction of an unknown physical quantity from indirect measurements. They appear in a variety of places, from medical imaging, for instance MRI or CT, to remote sensing, for instance radar, to material sciences and molecular biology, for instance electron microscopy. Here, inverse problems are a tool for looking inside specimens, resolving structures beyond the scale visible to the naked eye, and quantifying them. They are a means for diagnosis, prediction and discovery. Most inverse problems of interest are ill-posed and require appropriate mathematical treatment for recovering meaningful solutions. Classically, such approaches are derived almost exclusively in a knowledge-driven manner, constituting handcrafted mathematical models. Examples include variational regularization methods with Tikhonov regularization, the total variation, and several sparsity-promoting regularizers such as the L1 norm of the wavelet coefficients of the solution. While such handcrafted approaches deliver mathematically rigorous and computationally robust solutions to inverse problems, they are also limited by our ability to model solution properties accurately and to realise these approaches in a computationally efficient manner. Recently, a new paradigm has been introduced to the regularization of inverse problems, which derives solutions to inverse problems in a data-driven way. Here, the inversion approach is not mathematically modelled in the classical sense, but modelled by highly over-parametrised models, typically deep neural networks, that are adapted to the inverse problems at hand by appropriately selected training data. Current approaches that follow this new paradigm distinguish themselves through solution accuracies paired with computational efficiency that were previously inconceivable. In this lecture I will give an introduction to this new data-driven paradigm for inverse problems. Presented methods include data-driven variational models and plug-and-play approaches, learned iterative schemes (also known as learned unrolling), and learned post-processing. While presenting these methodologies, we will discuss their theoretical properties and provide numerical examples for image denoising, deconvolution and computed tomography reconstruction. The lecture will finish with a discussion of open problems and future perspectives.
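For concreteness (a standard formulation, not specific to this lecture): the handcrafted, knowledge-driven approach typically solves

    \min_u \; \tfrac{1}{2} \| A u - f \|_2^2 + \lambda R(u),

where A is the forward operator, f the measured data, and R a regularizer such as the Tikhonov term \|u\|_2^2, the total variation TV(u), or the sparsity-promoting \|W u\|_1 for a wavelet transform W. In the data-driven paradigm, R (or even the entire map from f to the reconstruction) is replaced or augmented by a learned, highly over-parametrised component trained on example pairs.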

Katharina Schratz

Sorbonne University

Resonances as a Computational Tool  |  A large toolbox of numerical schemes for dispersive equations has been established, based on different discretization techniques such as discretizing the variation-of-constants formula (e.g., exponential integrators) or splitting the full equation into a series of simpler subproblems (e.g., splitting methods). In many situations these classical schemes allow a precise and efficient approximation. This, however, drastically changes whenever non-smooth phenomena enter the scene, such as for problems at low regularity and high oscillations. Classical schemes fail to capture the oscillatory nature of the solution, and this may lead to severe instabilities and loss of convergence. In this talk I present a new class of resonance-based schemes. The key idea in the construction of the new schemes is to tackle and deeply embed the underlying nonlinear structure of resonances into the numerical discretization. As in the continuous case, these terms are central to structure preservation and give the new schemes strong geometric properties at low regularity. I will present the key idea behind resonances as a computational tool, their high-order counterpart (via tree series inspired by singular SPDEs), their error estimates in low-regularity spaces (via discrete Bourgain spaces), and numerical experiments on nonlinear dispersive quantization effects. I also want to address the fundamental question of resonance-based schemes and structure preservation, i.e., central symmetries and even more so symplecticity.
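As a model example (schematic, with the cubic nonlinear Schrödinger equation as the standard test case), the variation-of-constants formula mentioned above reads

    u(t) = e^{i t \Delta} u(0) - i \int_0^t e^{i(t-s)\Delta}\, |u(s)|^2 u(s)\, ds \qquad \text{for } i\,\partial_t u = -\Delta u + |u|^2 u.

Exponential integrators and splitting methods approximate this oscillatory integral by expansions that require smoothness of the solution; resonance-based schemes instead single out the dominant oscillatory phases generated by the interacting frequencies and integrate them exactly, which is what allows convergence at much lower regularity.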

Werner Seiler

University of Kassel

Singularities of Algebraic Differential Equations  |  We discuss a framework for defining and detecting singularities of arbitrary fully non-linear systems of ordinary or partial differential equations with polynomial non-linearities. It combines concepts from differential topology with methods from differential algebra and algebraic geometry. With its help, we introduce the notion of a regularity decomposition of a differential equation at a given order, and we prove the existence of such decompositions by presenting an algorithm for their effective determination (with an implementation in Maple). Finally, we will discuss some results on the existence, (non-)uniqueness and regularity of solutions to singular initial value problems for quasi-linear ordinary differential equations.
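A textbook toy example (not taken from the talk) of the phenomena mentioned in the last sentence: the quasi-linear equation

    y\, y' = x

is singular along the line y = 0, where the coefficient of the derivative vanishes. Through a point with y_0 \ne 0 the classical theory gives a unique local solution, whereas the singular initial value problem y(0) = 0 admits the two distinct solutions y = x and y = -x, illustrating the kind of existence and (non-)uniqueness questions addressed in the talk.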

Andreas Wiese

TU Munich

Approximation Algorithms for Packing Problems  |  Many problems in combinatorial optimization are NP-hard, and thus we do not expect to find efficient (polynomial-time) algorithms for them. However, often it is possible to find an efficient algorithm that computes a provably near-optimal solution, i.e., a solution whose quality differs from the optimum only by a bounded factor. Such an algorithm is called an approximation algorithm. In this talk, we will discuss approximation algorithms for packing problems, like the classical knapsack problem and generalizations of it. In recent years, there have been significant advances on such problems, like the two-dimensional geometric knapsack problem and the unsplittable flow problem on a path. We will survey these recent results, including the (currently best) (4/3+epsilon)-approximation algorithm for two-dimensional knapsack and a (1+epsilon)-approximation algorithm for the unsplittable flow problem on a path. In particular, in these new algorithms, sparse sketches of the optimal solution play an important role.
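As a minimal illustration of the notion of an approximation algorithm (a textbook example, far simpler than the results surveyed in the talk), the following sketch implements the classical 1/2-approximation for the 0/1 knapsack problem: pack greedily by value density and compare with the best single item; the returned value is always at least half the optimum.

    def knapsack_half_approx(values, weights, capacity):
        # Classical 1/2-approximation for the 0/1 knapsack problem (maximisation):
        # greedily pack items by value density, then return the better of the
        # greedy solution and the most valuable single item that fits on its own.
        # Assumes all weights are positive.
        by_density = sorted(range(len(values)),
                            key=lambda i: values[i] / weights[i],
                            reverse=True)
        greedy_value, remaining = 0, capacity
        for i in by_density:
            if weights[i] <= remaining:
                greedy_value += values[i]
                remaining -= weights[i]
        best_single = max((values[i] for i in range(len(values))
                           if weights[i] <= capacity), default=0)
        return max(greedy_value, best_single)

    # Example: the optimum is 220 (the items of weight 20 and 30),
    # so the guarantee is a value of at least 110; here the algorithm returns 160.
    print(knapsack_half_approx([60, 100, 120], [10, 20, 30], 50))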