A Data Processing

Project group A focuses on problems arising from signal processing and machine learning that are connected with sparsity (broadly construed).

Many modern data processing methods rely on sparsity concepts in one way or another. The most prominent example is probably compressive sensing, which provides efficient methods to recover (approximately) sparse vectors (signals, images, functions, data sets, etc.) from a small number of linear measurements. In abstract terms, this amounts to finding an element of the nonlinear set of vectors of a given sparsity that is consistent with an underdetermined system of linear equations. While this problem is NP-hard in general, a number of efficient algorithms have been introduced that provably recover the original vector under certain conditions on the measurement matrix describing the linear system. Convex relaxation leads to ℓ1-minimization, arguably the best understood method. While it remains an open problem in the field to construct optimal deterministic measurement matrices, several ensembles of random matrices have been shown to recover s-sparse vectors from the minimal number of measurements with high probability. From a practical point of view, it is important to work with structured random matrices generated from a small amount of randomness. Relevant examples include recovery from random samples of the Fourier transform and from subsampled random convolutions.
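In symbols, and under the standard setup assumed here (a measurement matrix A of size m × n with m much smaller than n, and measurements y = Ax), the recovery problem and its convex relaxation can be written as:

```latex
% Sparse recovery: \|x\|_0 counts the nonzero entries of x (NP-hard in general)
\min_{x \in \mathbb{R}^n} \|x\|_0 \quad \text{subject to} \quad Ax = y
% Convex relaxation (basis pursuit / \ell_1-minimization)
\min_{x \in \mathbb{R}^n} \|x\|_1 \quad \text{subject to} \quad Ax = y
```

For Gaussian measurement matrices, for instance, on the order of s log(n/s) measurements suffice for the relaxed problem to recover s-sparse vectors with high probability.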

An important extension of compressive sensing considers the recovery of matrices of low rank from an incomplete set of linear (random) measurements. Affine rank minimization is NP-hard, and convex relaxation leads to nuclear norm minimization. Recovery from an optimal number of measurements in terms of the rank and the matrix dimension can be shown for Gaussian random measurement maps. For applications such as quantum state tomography, it is again important to have structure in the measurement process and to reduce randomness.
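Writing the measurement process as a linear map 𝒜 from n₁ × n₂ matrices to ℝ^m (notation assumed here for illustration), the two problems just mentioned read:

```latex
% Affine rank minimization (NP-hard)
\min_{X \in \mathbb{R}^{n_1 \times n_2}} \operatorname{rank}(X) \quad \text{subject to} \quad \mathcal{A}(X) = y
% Convex relaxation: the nuclear norm \|X\|_* is the sum of the singular values of X
\min_{X \in \mathbb{R}^{n_1 \times n_2}} \|X\|_* \quad \text{subject to} \quad \mathcal{A}(X) = y
```

For Gaussian measurement maps, on the order of r(n₁ + n₂) measurements suffice to recover matrices of rank r, matching the number of degrees of freedom up to a constant.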

High dimensional data can often be represented efficiently as low rank tensors, generalizing low rank matrices to higher dimensional arrays. Unfortunately, extending the ideas described above from matrices to tensors is very challenging. Although several extensions of algorithms to the tensor case work well empirically, only partial theoretical results are available, and a convincing mathematical theory of low rank tensor recovery is still missing.
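For illustration, one common notion of tensor rank (the CP rank, used here as an example; Tucker and tensor-train formats are frequently used alternatives) directly generalizes the matrix case:

```latex
% A tensor X of order d has CP rank at most r if it is a sum of r rank-one tensors
X = \sum_{k=1}^{r} u_k^{(1)} \otimes u_k^{(2)} \otimes \cdots \otimes u_k^{(d)},
\qquad u_k^{(j)} \in \mathbb{R}^{n_j}
```

Unlike in the matrix case, already computing this rank is NP-hard, which is one indication of the difficulties mentioned above.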

As described above, deep learning is a highly popular machine learning method based on neural networks with many layers. Despite the great empirical success of the related methods, the underlying mathematics is still in its beginnings, and it can strongly be expected that ideas and proof methods from sparsity-related mathematics (compressive sensing) will play an important role. A particularly challenging question concerns convergence properties of the commonly used (stochastic) gradient descent type algorithms for minimizing the nonconvex functionals arising in the learning process for deep neural networks. The use of deep learning approaches for inverse problems has also seen promising successes in recent years, but many research questions remain open here as well. For instance, one may view the iterations of certain optimization methods for sparsity-based recovery (so-called forward-backward splitting methods, including iterative soft-thresholding) as layers of a neural network. Making some of the parameters of this network learnable provides a simple way of combining models of the forward problem with data-driven approaches, as sketched below.
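As an illustration of this unrolling idea, the following sketch (assuming PyTorch; all names, initializations, and parameter choices are illustrative, not the method of any specific project) writes a fixed number of iterative soft-thresholding (ISTA) steps for the ℓ1-regularized least squares problem as network layers whose matrices and thresholds can be trained:

```python
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    # Proximal operator of theta * ||.||_1: elementwise soft thresholding
    return torch.sign(x) * torch.relu(torch.abs(x) - theta)

class UnfoldedISTA(nn.Module):
    """K iterations of ISTA for min_x 0.5*||A x - y||_2^2 + lam*||x||_1,
    viewed as K layers of a neural network with learnable parameters."""

    def __init__(self, A, lam=0.1, num_layers=10):
        super().__init__()
        n = A.shape[1]
        step = float(1.0 / torch.linalg.norm(A, 2) ** 2)  # step size 1/||A||^2 of plain ISTA
        # Initialize with the matrices of classical ISTA; training may adapt them to data.
        self.W1 = nn.Parameter(step * A.T)                       # acts on the measurements y
        self.W2 = nn.Parameter(torch.eye(n) - step * (A.T @ A))  # acts on the previous iterate
        self.theta = nn.Parameter(torch.full((num_layers,), step * lam))  # per-layer thresholds
        self.num_layers = num_layers

    def forward(self, y):
        x = torch.zeros(self.W1.shape[0])
        for k in range(self.num_layers):
            x = soft_threshold(self.W1 @ y + self.W2 @ x, self.theta[k])
        return x

# Minimal usage sketch with a random Gaussian measurement matrix (illustrative only)
m, n, s = 40, 100, 5
A = torch.randn(m, n) / m ** 0.5
x_true = torch.zeros(n)
x_true[torch.randperm(n)[:s]] = torch.randn(s)
y = A @ x_true
x_hat = UnfoldedISTA(A, lam=0.05, num_layers=20)(y)  # untrained network = 20 ISTA steps
```

With the initialization above, the untrained network reproduces plain ISTA; making the matrices and thresholds learnable corresponds to the combination of forward-problem models with data-driven approaches described in the paragraph above.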

Group A contains the following individual projects related to the topics outlined above.

A01 Gradient descent for deep neural network learning (Holger Rauhut, Michael Westdickenberg)

A02 Scattering transforms of sparse signals (Hartmut Führ)

A03 Group actions and t-designs in sparse and low rank matrix recovery (Hartmut Führ, Gabriele Nebe, Holger Rauhut)

A06 Theta tensor norms and low rank recovery (Ghislain Fourier, Holger Rauhut)

A07 Signal processing on graphs and complexes (Michael Schaub)

A08 Sparse exit wave reconstruction via deep unfolding (Benjamin Berkels)

A09 Regularizing neural network classification using random perturbations (Sebastian Krumscheid, Holger Rauhut, Raul Tempone)