Previous talks at the SCCS Colloquium

Safa Sadiq: Sampling weights and biases in Autoencoders



Autoencoders are an unsupervised learning technique that leverages representation learning to reconstruct the original data as closely as possible. Traditionally, an autoencoder network is trained to minimize the reconstruction loss between the training data and the network's output using iterative, gradient-based methods. However, studies have explored neural networks whose weights and biases are sampled, either with a data-agnostic or a data-driven approach, instead of trained iteratively; since the sampled parameters need not be updated during training, training times are much shorter. The aim of this thesis is to extend sampled neural networks to the autoencoder setting. Simply sampling points from the input space, however, does not necessarily respect the intrinsic structure of the data manifold. Hence, kernel representation learning with a contrastive loss function is combined with data-driven sampling to learn an embedding from which the data can be accurately reconstructed. The approach is tested on benchmark datasets including MNIST and CIFAR-10.
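To illustrate the core idea of sampling instead of training, the following is a minimal sketch, not the thesis method itself: it uses a data-agnostic (Gaussian) random encoder with a tanh activation, and fits only a linear decoder by least squares. The toy dataset, layer width, and activation are all illustrative assumptions; the thesis additionally uses data-driven sampling and a contrastive kernel embedding, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points on a smooth 1-D curve embedded in 10 dimensions,
# standing in for data that lies on a low-dimensional manifold.
t = rng.uniform(-1, 1, size=(200, 1))
X = np.hstack([np.sin(np.pi * t), np.cos(np.pi * t)]) @ rng.normal(size=(2, 10))

# Encoder: weights and biases are sampled once and never updated
# (data-agnostic sampling; the thesis also considers data-driven sampling).
hidden = 64
W = rng.normal(size=(X.shape[1], hidden))
b = rng.normal(size=hidden)
H = np.tanh(X @ W + b)  # fixed random features (the "embedding")

# Decoder: the only fitted part, a linear least-squares readout back
# to the input space -- no iterative gradient descent required.
D, *_ = np.linalg.lstsq(H, X, rcond=None)
X_hat = H @ D

mse = np.mean((X - X_hat) ** 2)
print(f"reconstruction MSE: {mse:.4f}")
```

Because only a linear system is solved, "training" reduces to one least-squares fit, which is the source of the much shorter training times mentioned above.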

Master's thesis presentation. Safa is advised by Prof. Dr. Felix Dietrich.