Previous talks at the SCCS Colloquium

Nino Mumladze: Solving Eigenproblems with Neural Networks

SCCS Colloquium


Many problems in machine learning and pattern recognition boil down to solving eigenproblems. Tasks such as dimensionality reduction (PCA, Fisher's discriminant), clustering (spectral clustering), and data representation (Laplacian eigenmaps, Hessian eigenmaps, diffusion maps) are all based on computing principal eigenvectors and eigenvalues. There are various approaches for finding the spectral decomposition of a matrix. Because finding the roots of the characteristic polynomial of a matrix becomes computationally infeasible in higher dimensions, eigenvalues can be computed exactly in a finite number of steps only in special cases. In general, algorithms for finding eigenvalues and eigenvectors are iterative, such as the power method, inverse iteration, Rayleigh quotient iteration, and the QR algorithm. As the matrices arising in industry grow ever larger, it becomes increasingly important to solve eigenproblems efficiently, with methods that are fast, accurate, and feasible even for large amounts of data. Recently, non-linear neural-network-based approaches have been proposed for this problem, and they have been shown to be a realistic architecture for solving linear algebraic systems at high speed. In this work, we tackle eigenproblems with Artificial Neural Networks (ANNs) and compare the results with standard solvers in terms of accuracy, efficiency, runtime, memory requirements, etc.
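For context, the classical iterative baseline mentioned above can be illustrated with a minimal NumPy sketch of the power method. The function name, tolerances, and the 2x2 example matrix are illustrative choices, not taken from the talk:

```python
import numpy as np

def power_method(A, num_iters=1000, tol=1e-10):
    """Approximate the dominant eigenpair of A by power iteration."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(num_iters):
        w = A @ v                    # apply the matrix
        v_new = w / np.linalg.norm(w)  # renormalize the iterate
        lam_new = v_new @ A @ v_new    # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:   # stop once the eigenvalue settles
            lam, v = lam_new, v_new
            break
        lam, v = lam_new, v_new
    return lam, v

# Illustrative symmetric matrix with eigenvalues 3 and 1;
# the iteration converges toward the dominant eigenvalue 3.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, v = power_method(A)
```

Each iteration only needs one matrix-vector product, which is what makes such methods attractive for large matrices, but convergence slows when the two largest eigenvalues are close in magnitude.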

Master's thesis presentation. Nino is advised by Felix Dietrich.