Open Topics

We offer multiple Bachelor's/Master's theses, Guided Research projects, and HiWi positions in the area of data mining/machine learning. A non-exhaustive list of open topics can be found below.

If you are interested in an internal/external thesis or a guided research project, please apply through this form (using your TUM Microsoft Account). Please do not contact us via email.

Generative Models for Drug Discovery

Type: Master's Thesis / Guided Research

Prerequisites:

  • Strong machine learning knowledge
  • Proficiency with Python and deep learning frameworks (PyTorch or TensorFlow)
  • Knowledge of graph neural networks (e.g. GCN, MPNN)
  • No formal education in chemistry, physics or biology needed!

Description:

Effectively designing molecular geometries is essential to advancing pharmaceutical innovation, a domain that has attracted great attention following the success of generative models. By leveraging their learned representations, these models promise a more efficient exploration of the vast chemical space and the generation of novel compounds with specific properties, potentially leading to the discovery of molecules with unique properties that would otherwise go undiscovered. Our topics lie at the intersection of generative models, such as diffusion and flow matching models, and graph representation learning, e.g., graph neural networks. The focus of our projects ranges from model development with an emphasis on downstream tasks (e.g., diffusion guidance at inference time) to a better understanding of the limitations of existing models.
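To make the setting concrete, here is a minimal sketch of one DDPM-style training step on 3D atom coordinates. The linear noise schedule, all dimensions, and the MLP denoiser are illustrative placeholders; an actual model would use an equivariant GNN as in the references below.

```python
# Minimal sketch (illustrative, not our codebase): one DDPM-style training
# step on the 3D coordinates of a molecule. An MLP stands in for the
# equivariant GNN a real model would use.
import torch
import torch.nn as nn

T = 1000                                    # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule (assumed)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

denoiser = nn.Sequential(                   # placeholder for an equivariant GNN
    nn.Linear(3 + 1, 128), nn.SiLU(), nn.Linear(128, 3)
)

def diffusion_loss(x0):
    """x0: (num_atoms, 3) clean atom coordinates of one molecule."""
    t = torch.randint(0, T, (1,))
    eps = torch.randn_like(x0)
    a_bar = alphas_bar[t]
    xt = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps    # forward noising
    t_feat = (t.float() / T).expand(x0.size(0), 1)       # timestep feature
    eps_hat = denoiser(torch.cat([xt, t_feat], dim=-1))  # predict the noise
    return ((eps_hat - eps) ** 2).mean()

diffusion_loss(torch.randn(12, 3)).backward()            # toy 12-atom molecule
```

Guidance at inference time would then steer the reverse (denoising) process with the gradient of a property predictor, without retraining the denoiser.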

Contact: Leon Hetzel

References:

  1. Equivariant Diffusion for Molecule Generation in 3D
  2. Equivariant Flow Matching with Hybrid Probability Transport for 3D Molecule Generation
  3. Structure-based Drug Design with Equivariant Diffusion Models

 

Differentially Private Computer Vision

Type: HiWi

Prerequisites:

  • Strong machine learning knowledge
  • Proficiency with Python and deep learning frameworks (preferably PyTorch)
  • Experience in training vision models (e.g. image classifiers or segmentation models) desirable

Description:

In domains like medical imaging, machine learning models may be trained on sensitive data. Deep learning with differential privacy (DP) provides provable guarantees that such sensitive training data cannot be extracted from a model by malicious actors. However, these privacy guarantees are very generic and hold for arbitrary datasets (images, tabular data, natural language, ...). The objective of this project is to adapt deep learning with DP to the image domain in order to train models with stronger domain-specific privacy guarantees. We already have preliminary theoretical results. Thus, the initial focus will be on implementation and experimental evaluation. However, there are various opportunities for you to also generalize these theoretical contributions. The ultimate goal is a workshop publication at NeurIPS / ICLR / ICML (or a full paper, if we can further generalize our results).
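For orientation, below is a minimal sketch of DP-SGD in the spirit of reference 2 (Abadi et al.): per-example gradients are clipped to a fixed norm and Gaussian noise is added before the parameter update. The toy model and the hyperparameters clip_norm and noise_mult are illustrative; in practice one would likely build on a library such as Opacus.

```python
# Minimal DP-SGD sketch: per-sample gradient clipping + Gaussian noise.
# Toy model and hyperparameters; a real project would use e.g. Opacus.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                    # toy classifier
loss_fn = nn.CrossEntropyLoss()
clip_norm, noise_mult, lr = 1.0, 1.1, 0.1   # assumed values

def dp_sgd_step(xb, yb):
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xb, yb):                # per-example gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad for p in model.parameters()]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale                  # accumulate clipped gradients
    with torch.no_grad():
        for p, s in zip(model.parameters(), summed):
            noise = torch.randn_like(s) * noise_mult * clip_norm
            p -= lr * (s + noise) / len(xb) # noisy averaged update

dp_sgd_step(torch.randn(8, 10), torch.randint(0, 2, (8,)))
```

The domain-specific part of the project would replace this generic mechanism with one whose sensitivity analysis is tailored to images.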

Contact: Jan Schuchardt

References:

  1. How to DP-fy ML: A Practical Guide to Machine Learning with Differential Privacy
  2. Deep Learning with Differential Privacy
  3. The Algorithmic Foundations of Differential Privacy

Trajectory Prediction for Traffic Data

Type: HiWi (~12h/week)

Prerequisites:

  • Strong knowledge in machine learning
  • Very good coding skills
  • Proficiency with Python and deep learning frameworks (TensorFlow or PyTorch)

Description:

Machine learning plays a pivotal role in the optimization of infrastructure planning, enabling data-driven decision-making for urban development. It allows for a detailed analysis of individual agent behaviors, facilitating informed interventions in city planning. By modeling individual trajectories on street network graphs, we can extract valuable insights into individual mobility patterns and congestion scenarios. This research project is centered around two questions:

  1. Trajectory Modeling: Can machine learning techniques be employed to generate realistic trajectories on street network data?
  2. Path Prediction: Given a source and destination pair within a street network, can we accurately predict the most likely path an agent would take?

Our objective is to develop and apply geometric deep learning methods, e.g. simplicial complex networks, to various traffic datasets, with the ultimate aim of predicting routes based on recently collected mobility data in Munich.
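As a toy illustration of the path prediction question, the sketch below scores the current node's neighbours against the destination using embeddings from one round of mean-aggregation message passing. The graph, the scoring rule, and all dimensions are invented for the example; the project itself targets simplicial and geometric methods as in the references.

```python
# Toy sketch: which neighbour of the current node is the likely next step
# towards a destination? One round of mean-aggregation message passing
# produces node states; neighbours are scored by affinity to the destination.
import torch
import torch.nn as nn

num_nodes, dim = 6, 16
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [1, 4], [4, 5]])
adj = torch.zeros(num_nodes, num_nodes)
adj[edges[:, 0], edges[:, 1]] = 1.0
adj = adj + adj.T                               # undirected street graph

emb = nn.Embedding(num_nodes, dim)              # learnable node features
lin = nn.Linear(dim, dim)

def node_states():
    h = emb.weight
    deg = adj.sum(-1, keepdim=True).clamp(min=1)
    return torch.relu(lin((adj @ h) / deg + h)) # mean neighbour aggregation

def next_node_logits(current, destination):
    h = node_states()
    score = h @ h[destination]                  # affinity to the destination
    mask = adj[current] > 0                     # only neighbours are reachable
    return score.masked_fill(~mask, float("-inf"))

print(next_node_logits(current=1, destination=5).softmax(-1))
```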

Contact: Dominik Fuchsgruber

References:

  1. Principled Simplicial Neural Networks for Trajectory Prediction
  2. Extrapolating Paths with Graph Neural Networks
  3. Signal Processing on Simplicial Complexes
  4. Modeling Trajectories with Recurrent Neural Networks

Efficient Machine Learning: Pruning, Quantization, Distillation, and More

Type: Master's Thesis / Working Student / Intern

Prerequisites:

  • Strong knowledge in machine learning
  • Proficiency with Python and deep learning frameworks (TensorFlow or PyTorch)

Description:

The efficiency of machine learning algorithms is commonly evaluated by looking at target performance, speed, and memory footprint metrics. Reducing the costs associated with these metrics is of primary importance for real-world applications with limited resources (e.g. embedded systems, real-time predictions). In this project, you will investigate solutions to improve the efficiency of machine learning models by looking at multiple techniques like pruning, quantization, distillation, and more.
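As one concrete instance of these techniques, here is a minimal sketch of knowledge distillation in the sense of reference 3: the student is trained to match the teacher's temperature-softened predictions in addition to the usual label loss. The linear models, temperature T, and mixing weight alpha are illustrative.

```python
# Minimal knowledge-distillation sketch: the student mimics the teacher's
# temperature-softened logits alongside the usual cross-entropy label loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(32, 10)      # stands in for a large pretrained model
student = nn.Linear(32, 10)      # smaller model we actually want to deploy
T, alpha = 4.0, 0.5              # temperature and mixing weight (assumed)

def distillation_loss(x, y):
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    soft = F.kl_div(F.log_softmax(s_logits / T, -1),
                    F.softmax(t_logits / T, -1),
                    reduction="batchmean") * T * T   # keep gradient scale
    hard = F.cross_entropy(s_logits, y)
    return alpha * soft + (1 - alpha) * hard

x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
distillation_loss(x, y).backward()
```

Pruning and quantization can be sketched just as compactly; in all cases the interesting question is how much target performance survives the compression.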

Contact: Dr. Bertrand Charpentier, Johanna Sommer

Apply here: careers.pruna.ai

References:

  1. The Efficiency Misnomer
  2. A Gradient Flow Framework for Analyzing Network Pruning
  3. Distilling the Knowledge in a Neural Network
  4. A Survey of Quantization Methods for Efficient Neural Network Inference

 

Deep Generative Models

Type: Master's Thesis / Guided Research

Prerequisites:

  • Strong machine learning and probability theory knowledge
  • Proficiency with Python and deep learning frameworks (TensorFlow or PyTorch)
  • Knowledge of generative models and their basics (e.g., Normalizing Flows, Diffusion Models, VAE)
  • Optional: Neural ODEs/SDEs, Optimal Transport, Measure Theory

Description:

With recent advances, such as Diffusion Models, Transformers, Normalizing Flows, Flow Matching, etc., the field of generative models has gained significant attention in the machine learning and artificial intelligence research community. However, many problems and questions remain open, and the application to complex data domains such as graphs, time series, point processes, and sets is often non-trivial. We are interested in supervising motivated students to explore and extend the capabilities of state-of-the-art generative models for various data domains.
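As a small, self-contained example from this model family, below is a sketch of a flow matching loss under the simple assumption of a linear probability path between noise and data; the vector field network and dimensions are placeholders.

```python
# Flow matching sketch (linear path): train a network to predict the
# velocity that transports noise samples x0 to data samples x1.
import torch
import torch.nn as nn

dim = 2
vector_field = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(),
                             nn.Linear(64, dim))

def flow_matching_loss(x1):
    """x1: (batch, dim) data samples."""
    x0 = torch.randn_like(x1)                 # noise samples
    t = torch.rand(x1.size(0), 1)             # random times in [0, 1]
    xt = (1 - t) * x0 + t * x1                # linear interpolation path
    target = x1 - x0                          # constant velocity along path
    pred = vector_field(torch.cat([xt, t], -1))
    return ((pred - target) ** 2).mean()

flow_matching_loss(torch.randn(64, dim)).backward()
```

Extending such objectives to graphs, sets, or point processes is exactly where the open questions start.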

Contact: Marcel Kollovieh, David Lüdke

Graph Transformer

Type: Master's Thesis / Bachelor's Thesis / Guided Research

Prerequisites:

  • Strong knowledge of machine learning and deep learning
  • Proficiency with Python and deep learning frameworks (TensorFlow or PyTorch)
  • Ideally: Knowledge of Graph Neural Networks and Graph Theory

Description:

Graph Transformers have emerged as a promising method for handling graph-structured data by combining the power of transformers with graph representation learning. Unlike traditional graph neural networks that rely on local message passing, Graph Transformers leverage self-attention mechanisms to capture long-range dependencies across nodes in the graph. This allows for more flexible and expressive representations of complex graphs, which is beneficial for a wide range of tasks such as molecular property prediction or social network analysis. Despite their potential, Graph Transformers still face challenges in scalability and model interpretability, and there are numerous opportunities to enhance their efficiency, extend their application to large-scale graphs, and explore their theoretical properties.
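A rough sketch of one possible Graph Transformer layer is given below: standard multi-head self-attention over all nodes, with a learned additive bias on edges as one simple way (of many) to inject graph structure into the attention logits.

```python
# Sketch of a Graph Transformer layer: global self-attention over nodes,
# with a learned additive bias that favours attending to graph neighbours.
import torch
import torch.nn as nn

class GraphTransformerLayer(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, 2 * dim), nn.ReLU(),
                                nn.Linear(2 * dim, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.edge_bias = nn.Parameter(torch.tensor(1.0))

    def forward(self, h, adj):
        bias = self.edge_bias * adj                # (nodes, nodes) float mask,
        a, _ = self.attn(h, h, h, attn_mask=bias)  # added to attention logits
        h = self.norm1(h + a)
        return self.norm2(h + self.ff(h))

layer = GraphTransformerLayer(dim=32)
h = torch.randn(1, 5, 32)                 # batch of one graph with 5 nodes
adj = (torch.rand(5, 5) > 0.5).float()    # toy adjacency matrix
out = layer(h, adj)                       # (1, 5, 32)
```

The quadratic cost of the dense attention matrix is precisely the scalability challenge mentioned above.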

Contact: Niklas Kemper

Graph Neural Networks

Type: Master's Thesis / Bachelor's Thesis / Guided Research

Prerequisites:

  • Strong machine learning knowledge
  • Proficiency with Python and deep learning frameworks (TensorFlow or PyTorch)
  • Knowledge of graph neural networks (e.g. GCN, MPNN)
  • Knowledge of graph/network theory

Description:

Graph neural networks (GNNs) have recently achieved great success in a wide variety of applications, such as chemistry, reinforcement learning, knowledge graphs, traffic networks, and computer vision. These models leverage graph data by updating node representations based on messages passed between nodes connected by edges, or by transforming node representations using spectral graph properties. While these approaches are very effective, many theoretical aspects of these models remain unclear, and there are many possible extensions that improve GNNs by going beyond nodes' direct neighbors and simple message aggregation.
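For concreteness, here is a minimal message-passing layer in the spirit of the MPNN framework: each node aggregates transformed messages from its incoming edges and updates its own state. The GRU update and sum aggregation are just one common choice.

```python
# Minimal MPNN-style layer: compute a message per directed edge, sum the
# incoming messages per node, then update each node state with a GRU cell.
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)   # message from (sender, receiver)
        self.upd = nn.GRUCell(dim, dim)      # node state update

    def forward(self, h, edge_index):
        src, dst = edge_index                # (2, num_edges) COO edge list
        m = torch.relu(self.msg(torch.cat([h[src], h[dst]], -1)))
        agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum aggregation
        return self.upd(agg, h)

h = torch.randn(4, 8)                                    # 4 nodes, 8 features
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])  # directed ring
out = MessagePassingLayer(8)(h, edge_index)              # (4, 8) updated states
```

Going beyond direct neighbors and simple aggregation means rethinking precisely the aggregation step above.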

Contact: Simon Geisler

References:

  1. Semi-Supervised Classification with Graph Convolutional Networks
  2. Relational Inductive Biases, Deep Learning, and Graph Networks
  3. Diffusion Improves Graph Learning
  4. Weisfeiler and Leman Go Neural: Higher-Order Graph Neural Networks
  5. Reliable Graph Neural Networks via Robust Aggregation

 

Physics-aware Graph Neural Networks

Type: Master's Thesis / Guided Research

Prerequisites:

  • Strong machine learning knowledge
  • Proficiency with Python and deep learning frameworks (JAX or PyTorch)
  • Knowledge of graph neural networks (e.g. GCN, MPNN, SchNet)
  • Optional: Knowledge of machine learning on molecules and quantum chemistry

Description:

Deep learning models, especially graph neural networks (GNNs), have recently achieved great successes in predicting quantum mechanical properties of molecules. There is a vast amount of applications for these models, such as finding the best method of chemical synthesis or selecting candidates for drugs, construction materials, batteries, or solar cells. However, GNNs have only been proposed in recent years and there remain many open questions about how to best represent and leverage quantum mechanical properties and methods.
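To illustrate what physics-awareness can look like, below is a hedged SchNet-flavoured sketch in which messages are modulated by a learned function of interatomic distance, expanded in Gaussian radial basis functions. The cutoff range, dimensions, and toy water molecule are illustrative.

```python
# SchNet-flavoured sketch: continuous-filter convolution where each message
# is scaled by a learned function of the interatomic distance.
import torch
import torch.nn as nn

n_rbf, dim = 16, 32
centers = torch.linspace(0.0, 5.0, n_rbf)     # RBF centers (assumed range)

def rbf(d):                                   # (edges,) -> (edges, n_rbf)
    return torch.exp(-((d.unsqueeze(-1) - centers) ** 2) / 0.5)

filter_net = nn.Sequential(nn.Linear(n_rbf, dim), nn.SiLU(), nn.Linear(dim, dim))
embed = nn.Embedding(100, dim)                # atom-type embeddings

def interaction(z, pos, edge_index):
    src, dst = edge_index
    h = embed(z)
    d = (pos[src] - pos[dst]).norm(dim=-1)    # interatomic distances
    w = filter_net(rbf(d))                    # distance-dependent filters
    msg = h[src] * w                          # continuous-filter convolution
    return torch.zeros_like(h).index_add_(0, dst, msg)

z = torch.tensor([8, 1, 1])                   # toy water molecule: O, H, H
pos = torch.randn(3, 3)                       # random 3D positions
edge_index = torch.tensor([[0, 0, 1, 2], [1, 2, 0, 0]])
out = interaction(z, pos, edge_index)         # (3, dim) updated atom features
```

Because only distances enter the filters, the output is invariant to rotations and translations of the molecule; equivariant models (reference 6) go a step further and also handle directional quantities.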

Contact: Nicholas Gao

References:

  1. Directional Message Passing for Molecular Graphs
  2. Neural Message Passing for Quantum Chemistry
  3. Learning to Simulate Complex Physics with Graph Networks
  4. Ab Initio Solution of the Many-Electron Schrödinger Equation with Deep Neural Networks
  5. Ab-Initio Potential Energy Surfaces by Pairing GNNs with Neural Wave Functions
  6. Tensor Field Networks: Rotation- and Translation-Equivariant Neural Networks for 3D Point Clouds

 

Robustness Verification for Deep Classifiers

Type: Master's Thesis / Guided Research

Prerequisites:

  • Strong machine learning knowledge (at least equivalent to IN2064 plus an advanced course on deep learning)
  • Strong background in mathematical optimization (preferably in a machine learning setting)
  • Proficiency with Python and deep learning frameworks (PyTorch or TensorFlow)
  • (Preferred) Knowledge of training techniques to obtain classifiers that are robust against small perturbations in the data

Description:

Recent work shows that deep classifiers suffer from adversarial examples: misclassified points that are very close to the training samples or even visually indistinguishable from them. This undesired behaviour constrains the deployment of promising neural-network-based classification methods in safety-critical scenarios. Therefore, new training methods should be developed that promote (or, preferably, ensure) robust behaviour of the classifier around training samples.
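As one example of the certification techniques in the references, here is a sketch of prediction with a randomized smoothing classifier (reference 1 under References): many Gaussian-noised copies of the input are classified and the majority vote is returned. The vote statistics can additionally be converted into a certified L2 radius, which is omitted here; the base classifier and all sizes are toy assumptions.

```python
# Randomized smoothing sketch: the smoothed classifier returns the class
# that the base classifier predicts most often under Gaussian input noise.
import torch
import torch.nn as nn

base = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy model
sigma, n_samples = 0.25, 1000      # noise level and Monte Carlo sample count

@torch.no_grad()
def smoothed_predict(x):
    """x: (3, 32, 32) single input image."""
    noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
    votes = base(noisy).argmax(-1)              # class per noisy copy
    counts = torch.bincount(votes, minlength=10)
    return counts.argmax().item(), counts       # majority class, vote counts

cls, counts = smoothed_predict(torch.randn(3, 32, 32))
```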

Contact: Aleksei Kuvshinov

References (Background):

  1. Intriguing Properties of Neural Networks
  2. Explaining and Harnessing Adversarial Examples
  3. SoK: Certified Robustness for Deep Neural Networks

References:

  1. Certified Adversarial Robustness via Randomized Smoothing
  2. Formal Guarantees on the Robustness of a Classifier Against Adversarial Manipulation
  3. Towards Deep Learning Models Resistant to Adversarial Attacks
  4. Provable Defenses Against Adversarial Examples via the Convex Outer Adversarial Polytope
  5. Certified Defenses Against Adversarial Examples
  6. Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks

 

Uncertainty Estimation in Deep Learning

Type: Master's Thesis / Guided Research

Prerequisites:

  • Strong knowledge in machine learning
  • Strong knowledge in probability theory
  • Proficiency with Python and deep learning frameworks (TensorFlow or PyTorch)

Description:

Safe prediction is a key feature in many intelligent systems. Classically, machine learning models compute output predictions regardless of the underlying uncertainty of the encountered situations. In contrast, aleatoric and epistemic uncertainty capture the inherent ambiguity of the data and the model's lack of knowledge in uncommon situations, respectively. This uncertainty view can substantially help to detect and explain unsafe predictions, and therefore make ML systems more robust. The goal of this project is to improve uncertainty estimation in ML models in various types of tasks.
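As a small illustration, the sketch below decomposes the total predictive uncertainty of a toy deep ensemble into an aleatoric part (expected entropy) and an epistemic part (mutual information between prediction and model). The ensemble members are placeholder linear models.

```python
# Uncertainty decomposition for a toy deep ensemble:
#   total entropy = aleatoric (expected entropy) + epistemic (mutual info).
import torch
import torch.nn as nn

ensemble = [nn.Linear(8, 3) for _ in range(5)]   # placeholder members

@torch.no_grad()
def uncertainty(x):
    probs = torch.stack([m(x).softmax(-1) for m in ensemble])  # (M, B, C)
    mean = probs.mean(0)                                       # (B, C)
    total = -(mean * mean.clamp_min(1e-12).log()).sum(-1)
    aleatoric = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean(0)
    epistemic = total - aleatoric                # disagreement between members
    return total, aleatoric, epistemic

t, a, e = uncertainty(torch.randn(4, 8))         # per-sample uncertainties
```

Approaches like Posterior Networks (reference 3) aim to obtain such a decomposition from a single model instead of an ensemble.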

Contact: Tom Wollschläger, Dominik Fuchsgruber, Bertrand Charpentier

References:

  1. Can You Trust Your Model’s Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift
  2. Predictive Uncertainty Estimation via Prior Networks
  3. Posterior Network: Uncertainty Estimation without OOD samples via Density-based Pseudo-Counts
  4. Evidential Deep Learning to Quantify Classification Uncertainty
  5. Weight Uncertainty in Neural Networks

 

Hierarchies in Deep Learning

Type: Master's Thesis / Guided Research

Prerequisites:

  • Strong machine learning knowledge
  • Proficiency with Python and deep learning frameworks (TensorFlow or PyTorch)

Description:

Multi-scale structures are ubiquitous in real-life datasets. As an example, phylogenetic nomenclature naturally reveals a hierarchical classification of species based on their evolutionary history. Learning multi-scale structures can help to reveal natural and meaningful organization in the data and to obtain compact data representations. The goal of this project is to leverage multi-scale structures to improve the speed, performance, and understanding of Deep Learning models.
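One concrete example of a differentiable multi-scale structure is DiffPool-style pooling (reference 2); the sketch below implements a single soft-assignment pooling level with toy dimensions.

```python
# DiffPool-style sketch: a learned soft assignment matrix pools nodes into
# fewer clusters, giving one differentiable level of a hierarchy.
import torch
import torch.nn as nn

n_nodes, dim, n_clusters = 10, 16, 3
assign = nn.Linear(dim, n_clusters)     # produces soft cluster assignments

def diffpool(h, adj):
    s = assign(h).softmax(-1)           # (nodes, clusters), rows sum to 1
    h_pool = s.T @ h                    # pooled cluster features
    adj_pool = s.T @ adj @ s            # pooled cluster adjacency
    return h_pool, adj_pool

h = torch.randn(n_nodes, dim)                       # toy node features
adj = (torch.rand(n_nodes, n_nodes) > 0.7).float()  # toy adjacency
h_pool, adj_pool = diffpool(h, adj)     # (3, 16) features, (3, 3) adjacency
```

Stacking several such levels yields a learned hierarchy over the input graph.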

Contact: Marcel Kollovieh, Bertrand Charpentier

References:

  1. Tree Sampling Divergence: An Information-Theoretic Metric for Hierarchical Graph Clustering
  2. Hierarchical Graph Representation Learning with Differentiable Pooling
  3. Gradient-based Hierarchical Clustering
  4. Gradient-based Hierarchical Clustering using Continuous Representations of Trees in Hyperbolic Space