Our group has two papers accepted at the 2021 International Conference on Learning Representations (ICLR).
- Jan Schuchardt, Aleksandar Bojchevski, Johannes Klicpera, Stephan Günnemann
Collective Robustness Certificates: Exploiting Interdependence in Graph Neural Networks
International Conference on Learning Representations (ICLR), 2021
In tasks like node classification (e.g. with Graph Neural Networks), image segmentation, and named-entity recognition, we have a classifier that simultaneously outputs multiple predictions (a vector of labels) based on a single input, i.e., a single graph, image, or document, respectively. Existing adversarial robustness certificates consider each prediction independently and are therefore overly pessimistic for such tasks. In our work we propose the first collective robustness certificate, which computes the number of predictions that are simultaneously guaranteed to remain stable under perturbation, i.e., cannot be attacked. This is especially useful when working with Graph Neural Networks.
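To make the difference between per-prediction and collective certification concrete, here is a toy numerical sketch. All radii, the budget, and the attacker model are made up for illustration; this is not the certificate derived in the paper, only the basic counting intuition.

```python
import numpy as np

# Toy numbers (hypothetical, not from the paper): each prediction i is known to
# stay stable as long as the perturbation that reaches it has size at most r[i].
r = np.array([2.0, 3.0, 1.0, 4.0, 2.0])  # per-prediction certified radii
budget = 3.0                             # shared adversarial budget

# Per-prediction view: implicitly assumes the attacker can spend the full
# budget on every prediction separately.
naive_stable = int(np.sum(r > budget))

# Collective view: one shared budget has to be split across predictions
# (modelled here as disjoint "costs"); the strongest attacker greedily
# spends it on the cheapest predictions first.
costs = np.sort(r)
attacked = int(np.sum(np.cumsum(costs) <= budget))
collective_stable = len(r) - attacked

print(f"per-prediction count: {naive_stable} of {len(r)} predictions certified")
print(f"collective count:     {collective_stable} of {len(r)} predictions certified")
```

In this toy setting the independent view can only certify one prediction, while accounting for the shared budget certifies three, which is exactly the kind of gap a collective certificate exploits.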
- Daniel Zügner, Tobias Kirschstein, Michele Catasta, Jure Leskovec, Stephan Günnemann
Language-Agnostic Representation Learning of Source Code from Structure and Context
International Conference on Learning Representations (ICLR), 2021
Source code (Context) and its parsed abstract syntax tree (AST; Structure) are two complementary representations of the same computer program. We propose the CODE TRANSFORMER, which jointly learns on Context and Structure of source code. In contrast to previous approaches, our model uses only language-agnostic features, i.e., source code and features that can be computed directly from the AST. Our model obtains state-of-the-art performance on code summarization on five different programming languages. Besides these results for training on individual languages, the language-agnostic nature of our model allows us to train it jointly on multiple programming languages, thus, being the first multilingual code summarization model.
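To illustrate what features "computed directly from the AST" can look like, the following sketch uses Python's built-in `ast` module as a stand-in parser and reads node types and parent-child edges off the tree. It is a simplified illustration, not the CODE TRANSFORMER's actual feature pipeline.

```python
import ast

source = """
def greet(name):
    message = "Hello, " + name
    return message
"""

# Structure: the abstract syntax tree parsed from the raw source code (Context).
tree = ast.parse(source)

# Language-agnostic structural features: node types and parent-child relations
# can be extracted from the tree without any language-specific, hand-crafted rules.
node_types = [type(node).__name__ for node in ast.walk(tree)]
edges = [
    (type(parent).__name__, type(child).__name__)
    for parent in ast.walk(tree)
    for child in ast.iter_child_nodes(parent)
]

print(node_types[:6])  # e.g. ['Module', 'FunctionDef', 'arguments', ...]
print(edges[:3])       # e.g. [('Module', 'FunctionDef'), ('FunctionDef', 'arguments'), ...]
```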
Furthermore, we have one paper accepted at the 2021 International Conference on Artificial Intelligence and Statistics (AISTATS).
- Yihan Wu, Aleksandar Bojchevski, Aleksei Kuvshinov, Stephan Günnemann
Completing the Picture: Randomized Smoothing Suffers from Curse of Dimensionality for a Large Family of Distributions
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021
Randomized smoothing is currently the most competitive technique for providing provable robustness guarantees. Since the approach is model-agnostic and inherently scalable, it can be used to certify arbitrary classifiers. Despite its success, recent works show that for a small class of i.i.d. distributions, the largest radius that can be certified with randomized smoothing decreases as the input dimensionality grows. We complete the picture and show that similar results hold for a much more general family of distributions that are continuous and symmetric about the origin.
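For context, here is a minimal sketch of standard Gaussian randomized smoothing in the style of Cohen et al. (2019): sample noisy copies of the input, take the majority vote as the smoothed prediction, and certify an l2 radius of sigma * Phi^{-1}(p_top). The base classifier and all parameters below are toy placeholders, and no confidence bound is placed on the Monte-Carlo estimate, so this is illustrative rather than a rigorous certificate.

```python
import numpy as np
from scipy.stats import norm

def smoothed_predict_and_radius(base_classifier, x, sigma=0.25, n_samples=1000):
    """Monte-Carlo sketch of Gaussian randomized smoothing: predict the majority
    class under Gaussian input noise and report the radius sigma * Phi^{-1}(p_top)."""
    noisy = x + sigma * np.random.randn(n_samples, *x.shape)
    labels = np.array([base_classifier(z) for z in noisy])
    counts = np.bincount(labels)
    top = int(counts.argmax())
    p_top = min(counts[top] / n_samples, 1 - 1e-6)  # avoid an infinite radius from a degenerate estimate
    radius = sigma * norm.ppf(p_top)                # positive only if the majority class has p_top > 0.5
    return top, max(radius, 0.0)

# Dummy base classifier that thresholds the first input coordinate:
toy_classifier = lambda z: int(z[0] > 0)
label, radius = smoothed_predict_and_radius(toy_classifier, np.array([0.2, -0.3]))
print(label, radius)
```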
Congratulations to all co-authors!