B01 Nonlinear reduced modeling for state and parameter estimation
The goal of this project is to develop nonlinear reduced models for parameter-dependent families of PDEs. We combine machine learning concepts based on Deep Neural Networks (DNNs) with stable variational formulations to guarantee rigorous accuracy quantification for a wide range of problem types. Primary research topics include state and parameter estimation as well as the identification and analysis of appropriate notions of compositional sparsity, which help to clarify when DNNs can avoid the curse of dimensionality; a rough illustration of the general approach is sketched below.
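The following minimal JAX sketch is only meant to illustrate the basic idea of learning a parameter-to-solution map with a DNN; it is not the project's own method. It trains a small network u_theta(x, mu) for the hypothetical model problem -(a(mu) u')' = 1 on (0,1) with u(0) = u(1) = 0 and a(mu) = 1 + mu/2, using a plain collocation least-squares residual instead of the rigorously stable variational (e.g., DPG-type) loss functions studied in the project's publications. All names, network sizes, and problem data are illustrative assumptions.

```python
# Hypothetical sketch: DNN surrogate for the parameter-to-solution map of a
# 1D parametric Poisson problem, trained on a simple residual loss.
import jax
import jax.numpy as jnp

def init_params(key, widths=(2, 32, 32, 1)):
    # Fully connected network with inputs (x, mu) and scalar output u(x, mu).
    params = []
    for m, n in zip(widths[:-1], widths[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (m, n)) / jnp.sqrt(m), jnp.zeros(n)))
    return params

def net(params, x, mu):
    h = jnp.array([x, mu])
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

def residual(params, x, mu):
    a = 1.0 + 0.5 * mu                      # parametric diffusion coefficient
    u_xx = jax.grad(jax.grad(net, argnums=1), argnums=1)(params, x, mu)
    return -a * u_xx - 1.0                  # strong-form PDE residual at (x, mu)

def loss(params, xs, mus):
    # Collocation least-squares residual over (x, mu) samples plus boundary penalty;
    # in the project this role is played by a variationally correct residual.
    r = jax.vmap(jax.vmap(residual, (None, 0, None)), (None, None, 0))(params, xs, mus)
    bc = jax.vmap(lambda m: net(params, 0.0, m) ** 2 + net(params, 1.0, m) ** 2)(mus)
    return jnp.mean(r ** 2) + 10.0 * jnp.mean(bc)

@jax.jit
def step(params, xs, mus, lr=1e-3):
    grads = jax.grad(loss)(params, xs, mus)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
params = init_params(key)
xs = jnp.linspace(0.05, 0.95, 32)           # interior collocation points
mus = jnp.linspace(-1.0, 1.0, 8)            # training parameter samples
for _ in range(2000):
    params = step(params, xs, mus)

# Compare against the exact solution u(x; mu) = x(1 - x) / (2 a(mu)).
mu_t, x_t = 0.3, 0.5
print(net(params, x_t, mu_t), x_t * (1 - x_t) / (2 * (1 + 0.5 * mu_t)))
```

The key design question, addressed by the project through stable variational formulations, is to replace this ad hoc residual by a loss that is provably equivalent to the error in an appropriate norm, so that a small training loss certifies accuracy of the reduced model.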
- Prof. Dr. Markus Bachmayr
- RWTH Aachen University
- +49 241 80 93950
- bachmayr@igpm.rwth-aachen.de
- Prof. Dr. Wolfgang Dahmen
- RWTH Aachen University
- dahmen@igpm.rwth-aachen.de
Publications
- A Comparison Study of Supervised Learning Techniques for the Approximation of High Dimensional Functions and Feedback Control (Journal Article, 2025)
- Optimization-Free Diffusion Model - A Perturbation Theory Approach (Preprint, 2025)
- Expansive Natural Neural Gradient Flows for Energy Minimization (Preprint, 2025)
- DPG loss functions for learning parameter-to-solution maps by neural networks (Preprint, 2025)
- Variationally Correct Neural Residual Regression for Parametric PDEs: On the Viability of Controlled Accuracy (Preprint, 2024)
- S-SOS: Stochastic Sum-Of-Squares for Parametric Polynomial Optimization (Preprint, 2024)
- Towards Continuous Mathematical Models for the Analysis of Classes of Deep Neural Networks (Journal Article, 2024)
- Some Thoughts on Compositional Tensor Networks (Journal Article, 2024)
- Accuracy Controlled Schemes for the Eigenvalue Problem of the Radiative Transfer Equation (Preprint, 2023)
- Solving PDEs with Incomplete Information (Preprint, 2023)
- Compositional Sparsity, Approximation Classes, and Parametric Transport Equations (Preprint, 2023)
- Least squares solvers for ill-posed PDEs that are conditionally stable (Preprint, 2022)