
Simon Blöchinger: Implementation of a Deep Sparse Grid Layer in PyTorch

Sparse grids are useful for function approximation in high dimensions because they reduce the impact of the "curse of dimensionality": the number of grid points does not grow exponentially with the number of dimensions. This makes function approximation feasible in higher dimensions than full grids allow. Sparse grids represent a function as a linear combination of nonlinear basis functions.
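As a rough, back-of-the-envelope illustration of this growth behaviour (not taken from the thesis; the function names are made up for this example), the following Python snippet counts the points of a regular sparse grid without boundary points, built from the hierarchical subgrids whose level sum is bounded, and compares them to a full grid of the same level:

```python
from itertools import product
from math import prod

def full_grid_points(level: int, dim: int) -> int:
    # Full grid of the given level without boundary points:
    # (2**level - 1) points per dimension.
    return (2 ** level - 1) ** dim

def sparse_grid_points(level: int, dim: int) -> int:
    # Regular sparse grid without boundary points: keep only the
    # hierarchical subgrids whose level multi-index l (all l_j >= 1)
    # satisfies |l|_1 <= level + dim - 1; the subgrid for l holds
    # prod(2**(l_j - 1)) points.
    total = 0
    for levels in product(range(1, level + 1), repeat=dim):
        if sum(levels) <= level + dim - 1:
            total += prod(2 ** (l - 1) for l in levels)
    return total

for dim in (2, 4, 8):
    print(dim, full_grid_points(5, dim), sparse_grid_points(5, dim))
```

For level 5, the full-grid count explodes with the dimension while the sparse-grid count grows far more slowly, which is the reduction of the curse of dimensionality mentioned above.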

A neural network can also represent a function as a linear combination of nonlinear basis functions. In contrast to sparse grids, however, the neural network learns the linear combination.
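To make the analogy concrete, a one-hidden-layer network can be read exactly this way: the hidden activations play the role of nonlinear basis functions, and the output layer holds the learned linear combination. The following minimal PyTorch sketch (not from the thesis; ShallowNet is a hypothetical name) illustrates that reading:

```python
import torch
import torch.nn as nn

class ShallowNet(nn.Module):
    """One hidden layer: the tanh units act as nonlinear basis functions,
    the output layer learns their linear combination."""

    def __init__(self, dim_in: int, num_basis: int):
        super().__init__()
        self.basis = nn.Sequential(nn.Linear(dim_in, num_basis), nn.Tanh())
        self.combination = nn.Linear(num_basis, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.combination(self.basis(x))
```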

A combination of sparse grids and neural networks could lead to a more efficient use of resources and faster training of the neural network. Since frameworks that allow using sparse grids inside neural networks are currently lacking, this thesis introduces an implementation of a deep sparse grid layer. The implementation uses the Python machine learning library PyTorch. With it, future researchers can start evaluating the advantages and disadvantages of sparse grids inside neural networks more quickly. It provides a customizable sparse grid layer, which uses a sparse grid built with the combination technique and can utilize PyTorch's parallel tensor computation on compatible GPUs.
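The abstract itself contains no code; the following is only a minimal sketch of what a combination-technique sparse grid layer in PyTorch could look like, assuming hat basis functions on [0, 1]^d, one learnable coefficient per grid point, and the classical combination coefficients. The class names ComponentGridLayer and CombinationSparseGridLayer are hypothetical and not taken from the thesis.

```python
import itertools
import math

import torch
import torch.nn as nn


class ComponentGridLayer(nn.Module):
    """Hat-function interpolant on one anisotropic full grid with levels l."""

    def __init__(self, levels):
        super().__init__()
        # Mesh width per dimension: h_j = 2**(-l_j).
        self.register_buffer("h", torch.tensor([2.0 ** -l for l in levels]))
        points_per_dim = [2 ** l - 1 for l in levels]  # interior points only
        # One learnable coefficient per grid point (initialized to zero here).
        self.coeffs = nn.Parameter(torch.zeros(math.prod(points_per_dim)))
        # Indices of all grid points, shape (num_grid_points, d).
        idx = torch.cartesian_prod(
            *[torch.arange(1, n + 1, dtype=torch.float32) for n in points_per_dim]
        )
        self.register_buffer("idx", idx.reshape(-1, len(levels)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d) in [0, 1]^d; hat basis prod_j max(0, 1 - |x_j/h_j - i_j|).
        t = x.unsqueeze(1) / self.h - self.idx                # (batch, points, d)
        phi = torch.clamp(1.0 - t.abs(), min=0.0).prod(-1)    # (batch, points)
        return phi @ self.coeffs                              # (batch,)


class CombinationSparseGridLayer(nn.Module):
    """Sum of component grids weighted with the classical combination
    coefficients (-1)**q * binom(d-1, q) over |l|_1 = level + d - 1 - q."""

    def __init__(self, dim: int, level: int):
        super().__init__()
        grids, weights = [], []
        for q in range(dim):
            target = level + dim - 1 - q
            for levels in itertools.product(range(1, level + 1), repeat=dim):
                if sum(levels) == target:
                    grids.append(ComponentGridLayer(levels))
                    weights.append((-1.0) ** q * math.comb(dim - 1, q))
        self.grids = nn.ModuleList(grids)
        self.register_buffer("weights", torch.tensor(weights))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        parts = torch.stack([g(x) for g in self.grids], dim=-1)  # (batch, grids)
        return parts @ self.weights                              # (batch,)
```

A layer like this could be dropped into a larger PyTorch model and trained with the usual optimizers, and its tensor operations would run on a GPU if the module and inputs are moved there; whether this matches the design choices made in the thesis is not specified by the abstract.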

Bachelor's thesis submission talk. Simon is advised by Dr. Felix Dietrich.