Previous talks at the SCCS Colloquium

Iremnur Kidil: Neural Networks Solving Linear Systems

SCCS Colloquium


Solving linear systems is a fundamental task in engineering and computational science, underpinning fields such as numerical simulation, image and signal processing, and fluid dynamics. A system of linear equations can be written collectively as Ax = b, with a matrix A, a solution vector x, and data b. The matrix A and the data b are given, and we need to find x. In this work, we explore two computational methods for solving linear systems, first using a linear solver and then implementing a neural network, and we compare the results. As the linear solver we use numpy.linalg.lstsq from NumPy's linear algebra functions, which relies on BLAS and LAPACK to compute approximations of x. It applies a least-squares regression to the given data and returns the least-squares solution x of the linear matrix equation. For the second method, we build a neural network, which represents a nonlinear function, train it with the data b, and expect to obtain x as the output. An advantage is that neural networks can be trained on batches of data (subsets of the rows of A and b), so that, unlike standard linear solvers, they do not require the full matrix A and vector b as input at once. After the direct solution of linear systems with neural networks, we discuss and explore special settings in which using a neural network to solve a linear system is advantageous compared to the standard solvers.
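
To make the first approach concrete, here is a minimal NumPy sketch of solving an overdetermined system with numpy.linalg.lstsq; the matrix sizes and random data are illustrative only, not taken from the thesis.

```python
import numpy as np

# Illustrative overdetermined system Ax = b with a known solution.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))
x_true = rng.normal(size=5)
b = A @ x_true

# numpy.linalg.lstsq returns the least-squares solution together with
# the residuals, the rank of A, and its singular values.
x_ls, residuals, rank, singular_values = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_ls, x_true))  # True: the system is consistent by construction
```

The second approach is described only at a high level in the abstract. As a rough sketch of the batching idea (not the network architecture from the talk), one can treat the entries of x as trainable parameters and minimize the squared residual ||Ax - b||^2 over mini-batches of rows, so the full system never has to be provided at once. The learning rate, batch size, and problem size below are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative data: a consistent system Ax = b with 1000 equations and 5 unknowns.
rng = np.random.default_rng(1)
A = rng.normal(size=(1000, 5))
x_true = rng.normal(size=5)
b = A @ x_true

x = np.zeros(5)      # current estimate of the solution
lr = 1e-3            # learning rate (illustrative choice)
batch_size = 32      # only this many rows of A and b are used per update

for epoch in range(200):
    perm = rng.permutation(len(b))
    for start in range(0, len(b), batch_size):
        idx = perm[start:start + batch_size]
        A_batch, b_batch = A[idx], b[idx]             # subset of the rows of A and b
        residual = A_batch @ x - b_batch
        grad = 2.0 * A_batch.T @ residual / len(idx)  # gradient of the mean squared error
        x -= lr * grad

print(np.linalg.norm(x - x_true))  # small after training on row batches
```

A genuine neural-network variant would replace the single parameter vector x with a nonlinear model, but the row-wise batching works in the same way.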

Bachelor's Thesis Submission Talk (Informatics). Iremnur is advised by Dr. Felix Dietrich.