Data Analytics and Machine Learning Group
Technische Universität München
News

Three papers accepted at ICML 2021; one at ECML-PKDD; one at IJCAI

May 10, 2021


Our group has three papers accepted at the 2021 International Conference on Machine Learning (ICML):

  • Johannes Klicpera, Marten Lienen, Stephan Günnemann
    Scalable Optimal Transport in High Dimensions for Graph Distances, Embedding Alignment, and More
    International Conference on Machine Learning (ICML), 2021
    Summary: Entropy-regularized optimal transport (OT) requires the full cost matrix between all pairs of objects. The resulting quadratic runtime prohibits the use of OT in large-scale machine learning problems. We propose two approximations to the cost matrix with log-linear runtime: first, a sparse approximation based on locality-sensitive hashing (LSH) and, second, locally corrected Nyström (LCN), a low-rank approximation with LSH-based sparse corrections. Our approximations speed up a state-of-the-art method for unsupervised word embedding alignment by 3x and improve its accuracy by 3.1 percentage points. For graph distance regression, we propose the graph transport network (GTN), which combines graph neural networks with LCN. GTN outperforms previous models by 48% and scales log-linearly in the size of the graphs. (A sketch of the LSH sparsification idea follows this list.)

  • Anna-Kathrin Kopetzki, Bertrand Charpentier, Daniel Zügner, Sandhya Giri, Stephan Günnemann
    Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable?
    International Conference on Machine Learning (ICML), 2021
    Summary: Dirichlet-based uncertainty (DBU) models are a recent and promising class of uncertainty-aware models. We present the first large-scale, in-depth study of the robustness of DBU models under adversarial attacks and show that the uncertainty estimates of DBU models are not robust with respect to three important tasks: (1) indicating correctly and wrongly classified samples; (2) detecting adversarial examples; and (3) distinguishing between in-distribution (ID) and out-of-distribution (OOD) data. Additionally, we explore the first approaches to making DBU models more robust. (A sketch of the attacked uncertainty scores follows this list.)

  • Marin Biloš, Stephan Günnemann
    Scalable Normalizing Flows for Permutation Invariant Densities
    International Conference on Machine Learning (ICML), 2021
    Summary: Modeling sets is an important problem in machine learning since this type of data appears in many domains; examples include point clouds, items in a shopping cart, and household electricity consumption across a city. A promising approach defines a family of symmetric densities with continuous normalizing flows, which allows us to maximize the likelihood directly and to sample new realizations with ease. However, calculating the trace of the Jacobian, a crucial step in this method, raises issues during both training and inference, limiting its practicality. We propose an alternative way of defining permutation-equivariant transformations that admit a closed-form trace. This leads to improvements not only during training but also in final performance. We demonstrate the benefits of our approach on point processes and general set modeling. (A sketch of the trace computation follows this list.)
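
The sparsification idea behind the first paper above can be illustrated in a few lines. The following is a minimal sketch, not the authors' implementation: entries of the Sinkhorn kernel are computed only for pairs of points that collide in a bucket of a random-hyperplane LSH. Squared Euclidean costs, the hash scheme, and the parameters (n_planes, eps, n_iter) are illustrative assumptions, and points without any collision would need a fallback in practice (the guard below merely avoids division by zero).

  import numpy as np
  from scipy.sparse import csr_matrix, diags

  def lsh_buckets(x, n_planes=8, seed=0):
      """Assign each point one bucket id via random hyperplanes."""
      rng = np.random.default_rng(seed)
      planes = rng.normal(size=(x.shape[1], n_planes))
      bits = (x @ planes > 0).astype(np.int64)
      return bits @ (2 ** np.arange(n_planes))

  def sparse_sinkhorn(x, y, eps=0.1, n_iter=100):
      """Sinkhorn iterations on an LSH-sparsified kernel matrix."""
      hx, hy = lsh_buckets(x), lsh_buckets(y)
      rows, cols, vals = [], [], []
      for b in np.intersect1d(hx, hy):  # only pairs sharing a bucket
          i, j = np.where(hx == b)[0], np.where(hy == b)[0]
          cost = ((x[i, None, :] - y[None, j, :]) ** 2).sum(-1)
          ii, jj = np.meshgrid(i, j, indexing="ij")
          rows += ii.ravel().tolist()
          cols += jj.ravel().tolist()
          vals += np.exp(-cost / eps).ravel().tolist()
      K = csr_matrix((vals, (rows, cols)), shape=(len(x), len(y)))
      a = np.full(len(x), 1.0 / len(x))   # uniform source marginal
      b_ = np.full(len(y), 1.0 / len(y))  # uniform target marginal
      u, v = np.ones(len(x)), np.ones(len(y))
      for _ in range(n_iter):  # standard Sinkhorn scaling updates
          u = a / np.maximum(K @ v, 1e-30)
          v = b_ / np.maximum(K.T @ u, 1e-30)
      return diags(u) @ K @ diags(v)  # sparse approximate transport plan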
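
To make concrete what the robustness study in the second paper attacks, here is a sketch of the test-time interface of a generic DBU model: the network outputs Dirichlet parameters alpha, and scores derived from alpha are thresholded to flag misclassified, adversarial, or OOD inputs. The score formulas are standard Dirichlet identities; the toy alpha values and the reject rule are placeholders, not the paper's setup.

  import numpy as np
  from scipy.special import digamma, gammaln

  def dirichlet_scores(alpha):
      """Prediction, total evidence, and differential entropy from alpha of shape (n, K)."""
      a0 = alpha.sum(-1, keepdims=True)
      probs = alpha / a0  # expected class probabilities
      # Differential entropy of Dir(alpha); higher means more uncertain.
      diff_ent = (gammaln(alpha).sum(-1) - gammaln(a0[..., 0])
                  - ((alpha - 1) * (digamma(alpha) - digamma(a0))).sum(-1))
      return probs.argmax(-1), a0[..., 0], diff_ent

  alpha = np.array([[20.0, 1.0, 1.0],  # confident in-distribution sample
                    [1.1, 1.0, 1.2]])  # near-uniform alpha, looks OOD
  pred, evidence, entropy = dirichlet_scores(alpha)
  print(pred, evidence, entropy)  # low evidence / high entropy => flag the input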
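
The bottleneck addressed by the third paper is easiest to see in code. In a continuous normalizing flow the log-density evolves as d log p / dt = -tr(df/dx), so training repeatedly needs the trace of a Jacobian that, for a set of n points in d dimensions, has size (n*d) x (n*d). The sketch below contrasts Hutchinson's stochastic trace estimator with a transformation whose Jacobian diagonal is known analytically; the elementwise dynamics tanh(w * x) are a stand-in for the paper's permutation-equivariant construction, not a reimplementation of it.

  import numpy as np

  def f(x, w):
      """Elementwise dynamics: permutation equivariant by construction."""
      return np.tanh(w * x)

  def trace_closed_form(x, w):
      # d tanh(w*x)/dx = w * (1 - tanh(w*x)^2); the Jacobian is diagonal,
      # so its trace is simply the sum of these derivatives.
      return (w * (1 - np.tanh(w * x) ** 2)).sum()

  def trace_hutchinson(x, w, n_samples=1000, seed=0):
      # tr(J) = E[v^T J v] for Rademacher v; Jv is approximated here by
      # finite differences to keep the sketch autograd-free.
      rng, eps, est = np.random.default_rng(seed), 1e-5, 0.0
      for _ in range(n_samples):
          v = rng.choice([-1.0, 1.0], size=x.shape)
          est += (v * (f(x + eps * v, w) - f(x, w)) / eps).sum()
      return est / n_samples

  x = np.random.default_rng(1).normal(size=(5, 3))  # a set of 5 points in R^3
  print(trace_closed_form(x, 0.7), trace_hutchinson(x, 0.7))  # agree in expectation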

Furthermore, we have one paper accepted at the journal track of the 2021 European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD) and one paper at the 2021 International Joint Conference on Artificial Intelligence (IJCAI):

  • Anna-Kathrin Kopetzki, Stephan Günnemann
    Reachable sets of classifiers and regression models: (non-)robustness analysis and robust training
    Machine Learning Journal, 2021
    Summary: Understanding the behavior of neural networks is an open challenge, requiring answers to questions about the robustness, explainability, and reliability of their predictions. We address these questions by computing reachable sets of neural networks, i.e., the sets of outputs resulting from continuous sets of inputs. We provide two efficient approaches that yield over- and under-approximations of the reachable set and use them to (1) analyze and enhance robustness properties of classifiers and regression models, (2) distinguish between reliable and non-reliable predictions for unlabeled inputs, and (3) quantify the influence of each feature on a prediction and compute a feature ranking. (An interval-arithmetic sketch follows this list.)
     
  • Oleksandr Shchur, Ali Caner Türkmen, Tim Januschowski, Stephan Günnemann
    Neural Temporal Point Processes: A Review
    International Joint Conference on Artificial Intelligence (IJCAI), 2021
    Summary: In this paper, we review neural temporal point processes (TPPs): flexible generative models for continuous-time event sequences. We describe the important design choices for neural TPPs, review established and emerging applications of these models, and discuss the main challenges that the research field of neural TPPs currently faces. (A log-likelihood sketch follows this list.)
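
The simplest instance of an over-approximated reachable set, in the spirit of the reachable-set paper above, is interval bound propagation: push an axis-aligned input box through the network with interval arithmetic, so that the true reachable set is guaranteed to lie inside the output box. The sketch below uses this textbook technique with random placeholder weights; the paper's approaches are tighter and also provide under-approximations.

  import numpy as np

  def interval_linear(lo, hi, W, b):
      """Exact bounds of W x + b when x ranges over the box [lo, hi]."""
      W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
      return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

  def reachable_box(lo, hi, layers):
      """Box containing all network outputs for inputs in [lo, hi]."""
      for W, b in layers[:-1]:
          lo, hi = interval_linear(lo, hi, W, b)
          lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # ReLU is monotone
      return interval_linear(lo, hi, *layers[-1])

  rng = np.random.default_rng(0)
  layers = [(rng.normal(size=(8, 2)), rng.normal(size=8)),
            (rng.normal(size=(2, 8)), rng.normal(size=2))]
  x = np.array([0.5, -0.2])
  lo, hi = reachable_box(x - 0.1, x + 0.1, layers)
  # The prediction is certified robust on the whole input box if one
  # logit's lower bound exceeds the other logit's upper bound.
  print(lo, hi)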
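
All models covered by the review are trained on the same objective, the point-process log-likelihood log L = sum_i log lambda(t_i | history) - integral_0^T lambda(t) dt. The sketch below evaluates it for a classical exponential-kernel Hawkes intensity, chosen because its compensator integral has a closed form; neural TPPs replace this hand-crafted intensity with a learned one. The parameter values are arbitrary.

  import numpy as np

  def hawkes_loglik(times, T, mu=0.5, a=0.8, beta=1.0):
      """lambda(t) = mu + a * sum_{t_j < t} beta * exp(-beta * (t - t_j))."""
      log_intensity = 0.0
      for i, t in enumerate(times):
          lam = mu + a * beta * np.exp(-beta * (t - times[:i])).sum()
          log_intensity += np.log(lam)
      # Compensator: the integral of the intensity over [0, T], in closed form.
      compensator = mu * T + a * (1 - np.exp(-beta * (T - times))).sum()
      return log_intensity - compensator

  events = np.array([0.4, 0.9, 1.1, 2.7, 3.0])
  print(hawkes_loglik(events, T=4.0))  # maximized w.r.t. mu, a, beta when fitting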

Congratulations to all co-authors!

