Previous talks at the SCCS Colloquium

Tim Waegemans: Adversarial Attacks on Gaussian Processes

SCCS Colloquium


Deep neural networks have been shown to be vulnerable to adversarial attacks. Small perturbations, such as changing a single pixel or adding noise that is visually hard to spot, can lead to a completely different classification. This is a problem for safety-critical applications that require robust models.
A Gaussian Process can be derived from an infinitely wide neural network with random weights, and this correspondence extends to deep and convolutional architectures. This thesis explores adversarial attacks on the resulting Gaussian Process Regression, checking whether attacks designed for regular neural networks also succeed in the infinite-width case. Gaussian Process Regression additionally provides a confidence bound for each prediction. We therefore analyze not only whether the classification can be manipulated, but also how the uncertainty estimate behaves on manipulated inputs, which lets us evaluate the robustness of the model: a high uncertainty signals to the application that a prediction should be treated carefully, which is the desirable output for data points outside the domain where the model performs well.
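To make the pipeline concrete, below is a minimal NumPy sketch under illustrative assumptions, not the thesis setup: the kernel is the arc-cosine kernel of order 1 (Cho & Saul, 2009), which is the GP kernel induced by an infinitely wide one-hidden-layer ReLU network with i.i.d. Gaussian weights; the data, noise level, and attack budget eps are toy placeholders; and the attack is a generic FGSM-style signed-gradient step on the predictive mean rather than the attacks studied in the thesis.

import numpy as np

def nngp_kernel(A, B):
    """Arc-cosine kernel of order 1: the covariance of an infinitely
    wide one-hidden-layer ReLU network with standard-normal weights."""
    na = np.linalg.norm(A, axis=1)[:, None]
    nb = np.linalg.norm(B, axis=1)[None, :]
    cos = np.clip(A @ B.T / (na * nb), -1.0, 1.0)
    theta = np.arccos(cos)
    return na * nb * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / np.pi

# Toy training data (a stand-in for the real regression task).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = np.tanh(X @ np.array([1.0, -0.5, 0.3]))

noise = 1e-3
K = nngp_kernel(X, X) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, y)

def predict(x):
    """Exact GP posterior mean and variance at a single test point x."""
    k = nngp_kernel(x[None, :], X)
    mean = (k @ alpha)[0]
    var = nngp_kernel(x[None, :], x[None, :])[0, 0] - (k @ np.linalg.solve(K, k.T))[0, 0]
    return mean, var

def num_grad(f, x, h=1e-5):
    """Central finite-difference gradient of a scalar function f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# FGSM-style step: move each coordinate by eps in the direction that
# increases the predictive mean, i.e. the worst case in an L-inf ball.
x0 = rng.normal(size=3)
eps = 0.2
g = num_grad(lambda x: predict(x)[0], x0)
x_adv = x0 + eps * np.sign(g)

for name, x in [("clean", x0), ("adversarial", x_adv)]:
    m, v = predict(x)
    print(f"{name:11s} mean={m:+.3f}  std={np.sqrt(max(v, 0.0)):.3f}")

The sketch makes the central question of the thesis tangible: the same perturbed input yields both a shifted posterior mean and a predictive variance, so one can check whether a successful manipulation of the prediction is accompanied by a rise in the reported uncertainty.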

Keywords: Adversarial Attacks, Convolutional Gaussian Processes, Gaussian Process Regression

Master's thesis submission talk (Data Engineering and Analytics). Tim is advised by Dr. Felix Dietrich.