Martin Skutella (TU Berlin)

SeMath Colloquium

Title: On the Expressivity of Neural Networks
Abstract: We study the class of functions that can be represented by a neural network with ReLU activations and a given architecture. We provide a mathematical counterbalance to the universal approximation theorems, which suggest that a single hidden layer suffices for learning any function. In particular, we investigate whether the class of exactly representable functions strictly increases as more layers are added. We also present results on how basic problems in combinatorial optimization can be solved via neural networks with ReLU activations. Our approach builds on techniques from mixed-integer and combinatorial optimization, polyhedral theory, and discrete and tropical geometry. The talk is based on joint work with Christoph Hertrich, Amitabh Basu, and Marco Di Summa.
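To illustrate the kind of exact representation at stake (this sketch is not from the talk; the weights are an illustrative choice), a one-hidden-layer ReLU network can compute max(x, y) exactly via the identity max(x, y) = ReLU(x - y) + ReLU(y) - ReLU(-y):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Illustrative one-hidden-layer ReLU network computing max(x, y) exactly,
# using max(x, y) = ReLU(x - y) + ReLU(y) - ReLU(-y).
W1 = np.array([[1.0, -1.0],   # computes x - y
               [0.0,  1.0],   # computes  y
               [0.0, -1.0]])  # computes -y
w2 = np.array([1.0, 1.0, -1.0])

def net(x, y):
    return w2 @ relu(W1 @ np.array([x, y]))

assert net(3.0, 5.0) == 5.0
assert net(-2.0, -7.0) == -2.0
```

Iterating such gadgets lets deeper networks compute the maximum of more inputs, which hints at why the question of whether additional layers strictly enlarge the class of exactly representable functions is nontrivial.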
