Title: Neural Troubles in Neural Galerkin (and why they are inevitable)
Abstract: Neural Galerkin is a nonlinear counterpart of classical dynamical approximation strategies for time-dependent PDEs. It approximates solutions by neural networks whose weights evolve in time. At first glance, the method is general, elegant, and easy to implement, and it has indeed enjoyed notable empirical success.
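For orientation, a minimal sketch of the standard formulation (following the usual Neural Galerkin setup; the notation is illustrative and need not match that of the talk): for a PDE $\partial_t u(t,x) = f(x, u(t,x))$, one takes the ansatz $u(t,\cdot) \approx U(\theta(t),\cdot)$ with time-dependent network weights $\theta(t)$ and chooses $\dot{\theta}(t)$ to minimize the residual $\| \nabla_\theta U(\theta,\cdot)\,\dot{\theta} - f(\cdot, U(\theta,\cdot)) \|_{L^2(\nu)}^2$ with respect to a sampling measure $\nu$, which yields the parameter evolution
$$ M(\theta)\,\dot{\theta} = F(\theta), \qquad M(\theta) = \int \nabla_\theta U \,\nabla_\theta U^\top \,\mathrm{d}\nu, \qquad F(\theta) = \int \nabla_\theta U \, f(\cdot, U)\,\mathrm{d}\nu. $$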
A closer look, however, quickly reveals troubles: formulating Neural Galerkin at a proper functional level, and connecting it rigorously to practical algorithms, raises a number of subtle challenges. Some stem from sampling issues; others arise from the fact that neural network classes are, mathematically speaking, not particularly well-behaved sets.
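To indicate where the sampling issue enters (again a sketch, under the formulation above): in practice $M(\theta)$ and $F(\theta)$ are estimated by Monte Carlo, e.g. $M_n(\theta) = \tfrac{1}{n}\sum_{i=1}^{n} \nabla_\theta U(\theta, x_i)\,\nabla_\theta U(\theta, x_i)^\top$ with $x_i \sim \nu$. For overparametrized networks such estimates are typically singular or severely ill-conditioned, so the linear system must be regularized, and the interplay of sampling, regularization, and time integration is one place where such troubles surface.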
In this talk, I will illustrate several of these troubles, explain where they come from, and discuss why they are not mere technical nuisances but conceptual features that are key to understanding neural approximation.
Joint work with Daan Bon, Benjamin, and Mark Peletier (TU Eindhoven).
(Host: Markus Bachmayr)