27.11 at 10:00: Invited talk by Adrian Kulmburg: Approximability of the Containment Problem for Zonotopes and Ellipsotopes

20.11 at 10:00: Master thesis (Shlomo Libo Feigin): "The Role of Data Augmentations in Joint Embedding Self-Supervised Learning: A Theoretical Analysis"

20.11 not before 10:30: We will continue User-friendly introduction to PAC-Bayes bounds (Chapter 4)

06.11 at 10:00: Bachelor thesis presentation, Moritz Friedemann, "Analysis of the Impact of Batch Normalization on Deep Graph Convolutional Networks"

06.11 not before 10:30: We will continue User-friendly introduction to PAC-Bayes bounds (Finish Chapter 2, read Chapter 3)

16.10. at 10:00. Roeya Khlifi, "Analyzing News Content and Evaluating GPT Neutrality"

7.8.: Anna van Elst (Master thesis) "Tight PAC-Bayesian Risk Certificates for Contrastive Learning"

7.8. Lena Libon (Bachelor thesis) "Analysis of Formal Language Recognition by Transformers"

31.7. Nil Ayday (Master thesis) "Tight Characterization of Generalisation Error for Semi-supervised Learning Using Linear Graph Neural Networks"

24.07. Paper reading "FALKON: An Optimal Large Scale Kernel Method" (Mae will present)

10.07. Debarghya will talk on NN-GPs

03.07. Paper reading: We will read main ideas of 2 papers: "Probabilistic Self-supervised Learning via Scoring Rules Minimization" and "A Probabilistic Model behind Self-Supervised Learning"

26.06. Paper reading "Robustness and Regularization of Support Vector Machines" (Maha will present)

19.06 Invited Talk: Leonardo Galli on "Don't be so Monotone: Relaxing Stochastic Line Search in Over-Parameterized Models"

12.06 Paper reading: On the Surrogate Gap between Contrastive and Supervised Losses

29.05 Brainstorming session on NN-GPs for SSL or even PCA. We will read one of the NN-GP papers (https://openreview.net/pdf?id=B1EA-M-0Z) and a paper on probabilistic PCA (https://www.cs.columbia.edu/~blei/seminar/2020-representation/readings/TippingBishop1999.pdf)

16.05 (exceptional on Thursday at 10:00) Invited talk by Guy Amir on "Formal verification of AI" 

15.05 Kernel Memory Networks: A Unifying Framework for Memory Modeling

24.04 Bachelor Thesis Presentation by Omar Debouni - "Learning Cluster Specific Representations"

17.04 Learning Curves for Gaussian Processes (Max will present, here are his slides)

10.04 Benign Overfitting in Linear Regression (Nil Ayday will present, here are her slides)

03.04 Master thesis presentation by Martin Eppert: "Provable Convergence of Projection Pursuit for Unbalanced Data"

27.03 The Expressive Power of Transformers with Chain of Thought

20.03 A Logic for Expressing Log-Precision Transformers

12.02. Master thesis presentation by Alexandru Craciun: "On the Stability of Gradient Descent for Large Learning Rate". Note: This is a Monday! The talk will take place at 15:00 in Room 03.06.011.

31.01. Paper reading: "Not too little, not too much: a theoretical analysis of graph (over)smoothing"

24.01. Paper reading: "Neural Harmonics: Bridging Spectral Embedding and Matrix Completion in Self-Supervised Learning"

17.01. Paper reading: "Adversarially Robust Low Dimensional Representations"

10.01. Group discussion with Optimization and Data Analysis group (Prof. Felix Krahmer) including a talk by Pascal Esser on self-supervised representation learning

20.12. Paper reading: "On The Adversarial Robustness of Principal Component Analysis"

13.12. Paper reading: "The Shape of Learning Curves: a Review" 

06.12. Paper reading: "Also for k-means: more data does not imply better performance"

29.11. Paper reading: "Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting"

22.11. Paper reading: "The Eigenlearning Framework: A Conservation Law Perspective on Kernel Regression and Wide Neural Networks"

15.11. Paper reading: "Remember What You Want to Forget: Algorithms for Machine Unlearning" (Satyaki)

25.10. Thesis presentations.

  • 10:30: Emre Demir (Master). Landscape Analysis for Multi-Objective Hardware-Aware Neural Architecture Search in Earth Observation Applications
  • 11:15: Omar Bouattour (Bachelor). Machine Learning Surrogates for Rare Event Estimation: A Comparative Study of Artificial Neural Networks and Kriging

26.09. Paper presentation: "Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension" by Moritz Haas 

11.07 Project update: "Interpretable models for clustering with pairwise similarities" by Khushi Kirar

04.07 Project update: "Edge of Stability in Linear Networks" by Alexandru Craciun

20.06 Master thesis presentation: Yunus Cobanoglu

13.06 Project update: "Wasserstein Projection Pursuit" by Satyaki Mukherjee, Martin Eppert 

06.06 Paper reading: Ji et al. "Power of Contrast for Feature Learning: A Theoretical Analysis"

30.05 Master thesis presentation: Arda Sener, "Perception System Validation: Detecting and Identifying Systematic Factors Impacting Frame-Based Detection Rates" 

23.05 Master thesis presentation: Aliya Ablitip, "Application of DL/ML Tools to Mouse Whole-brain Functional Ultrasound Imaging Data for Discovering Latent Behavioral and Brain States"

16.05 Project update: Yunus Cobanoglu on "Speeding up Graph Neural Nets using sparsification"; Blert Beqa on "Neural tangent kernel of Autoencoders"

09.05 Paper reading: Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels by S. Bombari, S. Kiyani, M. Mondelli (discussion moderated by Debarghya; everyone is expected to read the main parts of the paper, till page 12, before the meeting)

02.05 Talk: Maximilian Fleissner on "Explainable kernel clustering"

22.02.23 Presentation by Lukas Gosch on "Revisiting Robustness in Graph Machine Learning" (in person in Room 03.09.014)

08.02.23 Jiaqi on "Credible Intervals for Causal Effects in Linear Causal Models"

07.12.22 Presentation of different Master student projects

30.11.22 Different time: 10:30. Presentation of different Master student projects

16.11.22 James Martens. On the validity of kernel approximations for orthogonally-initialized neural networks

09.11.22 Siu Lun Chau, Robert Hu, Javier Gonzalez, Dino Sejdinovic. RKHS-SHAP: Shapley Values for Kernel Methods

02.11.22 Iain M. Johnstone and Debashis Paul. PCA in High Dimensions: An Orientation. We will read till Section 5 (end of page 5) and Appendices A-B

31.08 Master thesis presentation by Vishnuraj Mavilodan 

24.08 Dinh, Pascanu, Bengio, Bengio. Sharp Minima Can Generalize For Deep Nets. ICML 2017

20.07-27.07 Romain Couillet, Zhenyu Liao. Random Matrix Methods for Machine Learning: When Theory meets Applications 

28.06 Andrea Montanari and Kangjie Zhou: Overparametrized linear dimensionality reductions: From projection pursuit to two-layer neural network

22.06 Peter J. Bickel, Gil Kur, and Boaz Nadler. Projection pursuit in high dimensions

27.04-15.06 Romain Couillet, Zhenyu Liao. Random Matrix Methods for Machine Learning: When Theory meets Applications 

11.05 Guided Research project presentation by Alicia on 'Robustness of Neural Tangent Kernel'

02.02.2022 Ben Adlam, Jeffrey Pennington: Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition

09.02.2022 Invited talk by Soumendu Sundar Mukherjee on his paper: Learning with latent group sparsity via heat flow dynamics on networks (Subhroshekhar Ghosh, Soumendu Sundar Mukherjee)

16.02.2022 Rodrigo Veiga, Ludovic Stephan, Bruno Loureiro, Florent Krzakala, and Lenka Zdeborová: Phase diagram of Stochastic Gradient Descent in high-dimensional two-layer neural networks (moderation: Pascal)

23.02.2022 Eduardo Laber, Lucas Murtinho, On the price of explainability for some clustering problems

March - April 2022: No meetings. We will resume in the first week of May.

26.01.2022 Finite Versus Infinite Neural Networks: an Empirical Study (moderator: Maha)

19.01.2022 Zirui Wang, Theoretical Guarantees of Transfer Learning (moderator: Satyaki)

12.01.2022: Jesse van Oostrum, Nihat Ay. Parametrisation Independence of the Natural Gradient in Overparametrised Systems (background on Natural Gradient Methods: James Martens, New Insights and Perspectives on the Natural Gradient Method, first 11 pages) (moderator: Pascal)

29.12.2021, 05.01.2022: no meeting

22.12.2021: (meeting at 10:00) Sebastien Bubeck, Mark Sellke. A Universal Law of Robustness via Isoperimetry

15.12.2021: Nilesh Tripuraneni, Ben Adlam, Jeffrey Pennington. Overparameterization Improves Robustness to Covariate Shift in High Dimensions. NeurIPS 2021 (moderator: Leena)

08.12.2021: no meeting (NeurIPS)

01.12.2021: James B. Simon, Madeline Dickens, Michael R. DeWeese. Neural Tangent Kernel Eigenvalues Accurately Predict Generalization (moderator: Maha)

24.11.2021 (Note: one hour earlier than usual: 9:30 - 10:30!): Reinhard Heckel, Fatih Furkan Yilmaz. Early Stopping in Deep Networks: Double Descent and How to Eliminate it. ICLR 2021.

Reinhard Heckel has agreed to join the discussion. A short presentation of the paper is available here

17.11.2021: Jeffrey Negrea, Gintare Karolina Dziugaite, Daniel M. Roy. In Defense of Uniform Convergence: Generalization via derandomization with an application to interpolating predictors (moderator: Pascal)

10.11.2021: Zehua Lai, Lek-Heng Lim, Ke Ye. Simpler Grassmannian optimization (Sections 1-4, without proofs)

03.11.2021: Tripuraneni, Jordan, Jin. On the Theory of Transfer Learning: The Importance of Task Diversity. NeurIPS 2020 (moderator: Debarghya)

27.10.2021: Dominik Janzing. Causal Regularization (moderator: Leena)

20.10.2021: Donhauser et al. Interpolation can hurt robust generalization even when there is no noise. arXiv (moderator: Maha)

13.10.2021: Prasad Cheema, Mahito Sugiyama. Double Descent Risk and Volume Saturation Effects: A Geometric Perspective (moderator: Pascal)

06.10.2021: No meeting

29.09.2021: A. Radhakrishnan, M. Belkin, C. Uhler. Overparameterized neural networks implement associative memory

24.09.2021: Nil Ayday will present Bachelor thesis on "Improvement on Incremental Spectral Clustering"

17.09.2021: Mikhail Belkin. Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation (remaining part)

10.09.2021: Mikhail Belkin. Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation (till Sec 3)

06.08.2021. Discussion on critical points and learning dynamics for linear autoencoders. Papers:
  • Arnu Pretorius, Steve Kroon, Herman Kamper: Learning Dynamics of Linear Denoising Autoencoders
  • Daniel Kunin, Jonathan M. Bloom, Aleksandrina Goeva, Cotton Seed: Loss Landscapes of Regularized Linear Autoencoders
  • Andrew M. Saxe, James L. McClelland, Surya Ganguli: Exact solutions to the nonlinear dynamics of learning in deep linear neural networks
  • Xuchan Bao, James Lucas, Sushant Sachdeva, Roger Grosse: Regularized linear autoencoders recover the principal components, eventually

23.07.2021. Paper by Agustinus Kristiadi, Matthias Hein, Philipp Hennig: Learnable Uncertainty under Laplace Approximations

16.07.2021. Discussion on robustness. Papers:
  • Ali Shafahi, Ronny Huang, Christoph Studer, Soheil Feizi & Tom Goldstein: Are adversarial examples inevitable?
  • Jeremy Cohen, Elan Rosenfeld, J. Zico Kolter: Certified Adversarial Robustness via Randomized Smoothing
  • Alexander Levine and Soheil Feizi: Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation
  • Cassidy Laidlaw, Sahil Singla, Soheil Feizi: Perceptual Adversarial Robustness: Defense Against Unseen Threat Models

09.07.2021. Paper by Michael M. Bronstein, Joan Bruna, Taco Cohen, Petar Veličković: Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges

05.05.2021 Paper by Maria Refinetti, Sebastian Goldt, Florent Krzakala and Lenka Zdeborova: Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed

21.04.2021 Paper by Ravid Schwartz-Ziv and Naftali Tishby: Opening the black box of Deep Neural Networks via Information

16.12.2020: Debarghya will give talk on "Machine learning on comparison based data"

09.12.2020: Paper: Frost et al. ExKMC: Expanding Explainable k-Means Clustering. arXiv (first 12 pages)

02.12.2020: Paper: Wu et al. Simplifying Graph Convolutional Networks ICML 2019 (moderated by Mahalakshmi)

25.11.2020: Paper: Poggio et al. Theoretical issues in deep networks. PNAS 2020

18.11.2020: Paper: Verma, Zhang. Stability and Generalization of Graph Convolutional Neural Networks. KDD 2019 (moderated by Pascal)

11.11.2020: Paper: Biau et al. Some theoretical properties of GANs. Annals of Statistics 48(3), 2020

04.11.2020: Mahalakshmi Sabanayagam will present her Guided Research project on "Consistency of Clustering and Two-sample Testing of Graphons"

28.10.2020: Paper: Kügelgen et al. Semi-supervised learning, causality, and the conditional cluster assumption. UAI 2020

21.10.2020: Paper: Ghorbani et al. When Do Neural Networks Outperform Kernel Methods? arxiv 2020

14.10.2020: Demir Senturk will present his Bachelor thesis on "Empirical analysis of Graph Neural Networks"

07.10.2020: No meeting (workshop at MPP on sampling and clustering)

30.09.2020: Paper: Theisen et al. Good linear classifiers are abundant in the interpolating regime. arxiv 2020

23.09.2020: Paper: Chamon, Ribeiro. Probably Approximately Correct Constrained Learning. arXiv 2020

26.08.2020-16.09.2020: Break

19.08.2020: Paper+Talk: Vankadara, Ghoshdastidar. On the optimality of kernels for high-dimensional clustering. AISTATS 2020

12.08.2020: Paper: Baldin, Berthet. Statistical and Computational Rates in Graph Logistic Regression. AISTATS 2020

05.08.2020: Paper: Meehan, Chaudhuri, Dasgupta. A Three Sample Hypothesis Test for Evaluating Generative Models. AISTATS 2020

29.07.2020: Talk by Mengyue Liu on AHNG: Representation learning on attributed heterogeneous network. Information Fusion, 2019

21.07.2020: Parul Bhalla will present her Master's thesis on "Prediction of IT Incident Tickets using Machine Learning and Time Series Forecasting"

15.07.2020: Kirchler et al. Two-sample testing using deep learning. AISTATS 2020

08.07.2020: no meeting

01.07.2020: Simon S. Du, Kangcheng Hou, Barnabás Póczos, Ruslan Salakhutdinov, Ruosong Wang, Keyulu Xu. Graph Neural Tangent Kernel: Fusing Graph Neural Networks with Graph Kernels. NeurIPS 2019

24.06.2020: This week there will be final presentations for the master seminar 'Theoretical advances in deep learning'. Even though they are not in the same timeslot, you are invited to join the presentations. You can find the times and papers, as well as the link to the online meeting, here

17.06.2020: Zilong Tan, Samuel Yeom, Matt Fredrikson, Ameet Talwalkar. Learning Fair Representations for Kernel Models

10.06.2020: (Invited talks from ICML workshop 'Theoretical Physics for Deep Learning') Andrea Montanari, Linearized two-layers neural networks in high dimension and Sanjeev Arora, Is Optimization a sufficient language to understand Deep Learning? (watch videos before meeting; we discuss slides/talk during meeting)

03.06.2020: no meeting

27.05.2020: Vaishnavh Nagarajan, J. Zico Kolter. Uniform convergence may be unable to explain generalization in deep learning

20.05.2020: Gregory Naitzat, Andrey Zhitnikov, Lek-Heng Lim. Topology of deep neural networks

13.05.2020: Chatterjee. A deterministic theory of low rank matrix completion arXiv:1910.01079v2 (We will read till page 7)

06.05.2020: Yang et al. Breaking the Softmax Bottleneck: A High-Rank RNN Language Model. ICLR 2018 (We will read till page 5)

29.04.2020: Abbara, Aubin, Krzakala, Zdeborová. Rademacher complexity and spin glasses: A link between the replica and statistical theories of learning. arXiv:1912.02729

22.04.2020: Ma, Belkin. Diving into the shallows: a computational perspective on large-scale shallow learning. NIPS 2017

15.04.2020: Hajek, Sankagiri. Community Recovery in a Preferential Attachment Graph. IEEE Transactions on Information Theory, 2019.

11.03.2020: Pengfei Zhou, Tianyi Li, Pan Zhang. Phase transitions and optimal algorithms for semi-supervised classifications on graphs: from belief propagation to graph convolution network

03.03.2020: Qi Liu, Maximilian Nickel and Douwe Kiela. Hyperbolic Graph Neural Networks (NeurIPS 2019) and Ines Chami, Rex Ying, Christopher Ré, Jure Leskovec. Hyperbolic Graph Convolutional Neural Networks

26.02.2020: Lelarge, Miolane. Asymptotic Bayes risk for Gaussian mixture in a semi-supervised setting. arXiv:1907.03792.

19.02.2020: A convergence analysis of gradient descent for deep linear neural networks. Sanjeev Arora, Nadav Cohen, Noah Golowich, Wei Hu. ICLR 2019

12.02.2020: Golovnev, Pál, Szörényi. The Information-Theoretic Value of Unlabeled Data in Semi-Supervised Learning. ICML 2019

05.02.2020: Feldman. Does learning require memorization? arxiv 2019. 

29.01.2020: Mukherjee, Sarkar, Wang. When random initializations help: a study of variational inference for community detection. arxiv 2019. (We will read first 12 pages)

22.01.2020: Hastie, Montanari, Rosset, Tibshirani. Surprises in High-Dimensional Ridgeless Least Squares Interpolation. arxiv 2019. (We will read first 16 pages)

15.01.2020: Cai, Liang, Rakhlin. Inference via Message Passing on Partially Labeled Stochastic Block Models. arXiv 2016. (We will read first 18 pages)

08.01.2020: Ke, Honorio. Information-theoretic Limits for Community Detection in Network Models. Neurips 2018.