Classification via trained deep neural networks is often highly sensitive to adversarial noise on the input. We will investigate several approaches for increasing the robustness of deep learning models, including randomized smoothing, randomization of the network parameters, and constraints on the parameters during training. Our aim is to derive mathematical robustness guarantees. Furthermore, we will extend a new variant of stochastic gradient descent (the multi-iteration stochastic estimator), recently introduced by PI Tempone, to this training problem and will analyze its convergence properties.
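To illustrate the first of these approaches, randomized smoothing replaces a base classifier f with a smoothed classifier g whose prediction at x is the majority vote of f over Gaussian perturbations of x; the vote margin then yields certified robustness radii (as in Cohen et al.). The sketch below, with a hypothetical toy base classifier, shows only the Monte Carlo prediction step under assumed noise level `sigma` and sample count `n_samples`.

```python
import numpy as np

def smoothed_predict(classifier, x, sigma=0.25, n_samples=1000, rng=None):
    """Majority-vote prediction of the Gaussian-smoothed classifier g(x).

    classifier: maps an input array to an integer class label.
    sigma: standard deviation of the isotropic Gaussian input noise.
    """
    rng = np.random.default_rng(rng)
    # Sample base-classifier labels under Gaussian perturbations of x.
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    labels = np.array([classifier(x + eps) for eps in noise])
    # g(x) is the most frequent label; its vote fraction feeds the certificate.
    counts = np.bincount(labels)
    return int(np.argmax(counts))

# Hypothetical toy base classifier: label 1 iff the first coordinate is positive.
def base_classifier(x):
    return int(x[0] > 0.0)

x = np.array([0.6, -0.2])
print(smoothed_predict(base_classifier, x, sigma=0.25, n_samples=500, rng=0))
```

In a full implementation the vote counts would also be used to estimate the top-class probability and hence a certified L2 radius, which is where the mathematical guarantees enter.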
- Shan Wei