Master's thesis presentation. Ahmed is advised by Dr. Marta Mauri, Dr. Jacob Miller, Dr. Brian Dellabetta, and Prof. Dr. Christian Mendl.
Ahmed Darwish: A Parametric Method for Pretraining Parameterized Quantum Circuits using Tensor Networks
Tensor networks are experiencing a revival in the scientific community, finding applications in machine learning as well as renewed applications in physics, particularly in the subfield of quantum computing.
They are characterized by highly tunable approximation quality, which has led researchers to explore them as a stepping stone for quantum models, which can suffer from a plethora of issues when their parameters are not initialized properly.
This connection is motivated by the need for more robust quantum models that can compete with ever more powerful classical hardware.
To this end, prior work has initialized the parameters of a quantum circuit by first training a tensor network on the task at hand, "exporting" its parameters to the quantum circuit, and then continuing the optimization in the quantum setting.
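To make the classical pretraining stage concrete, below is a minimal toy sketch in JAX. It is not the thesis's actual implementation; the two-qubit Bell-state target, the bond dimension of 2, and the plain gradient-descent optimizer are all assumptions made purely for illustration. It fits a small matrix-product state (MPS) to a target state by minimizing the infidelity.

```python
# A minimal sketch of the classical pretraining stage (assumptions:
# JAX, a 2-qubit Bell-state target, bond dimension 2, plain gradient
# descent): fit a small matrix-product state (MPS) to a target state.
import jax
import jax.numpy as jnp

CHI = 2  # MPS bond dimension (an assumption for this toy example)

# Toy target: the Bell state (|00> + |11>)/sqrt(2).
target = jnp.zeros((2, 2)).at[0, 0].set(1.0).at[1, 1].set(1.0) / jnp.sqrt(2.0)

def mps_state(params):
    """Contract a two-site MPS A[i,a] B[a,j] into the full state psi[i,j]."""
    A, B = params
    return jnp.einsum("ia,aj->ij", A, B)

def infidelity(params):
    """1 - fidelity between the (normalized) MPS state and the target."""
    psi = mps_state(params)
    overlap = jnp.vdot(target, psi)
    return 1.0 - jnp.abs(overlap) ** 2 / jnp.vdot(psi, psi)

key_a, key_b = jax.random.split(jax.random.PRNGKey(0))
params = [jax.random.normal(key_a, (2, CHI)),
          jax.random.normal(key_b, (CHI, 2))]

grad_fn = jax.jit(jax.grad(infidelity))
for _ in range(300):  # plain gradient descent on the MPS tensors
    grads = grad_fn(params)
    params = [p - 0.1 * g for p, g in zip(params, grads)]

print("final MPS infidelity:", infidelity(params))
```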
However, existing works in the literature were either limited in the range of applications they covered, or did not provide a formulation general enough for researchers to adapt to their own use cases.
Automatic differentiation has been proposed as one technique that could make this conversion from a tensor network to a quantum circuit more straightforward and use-case-agnostic.
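As an illustration of how automatic differentiation can drive the conversion step itself, the following sketch (again in JAX, and again an assumption-laden toy rather than the method's actual formulation) fits the parameters of a small two-qubit RY/CNOT/RY circuit to a target state by differentiating through a simulation of the circuit; the Bell state here stands in for the output of the pretrained tensor network above.

```python
# A minimal sketch of the autodiff-based conversion step (assumptions:
# JAX, a 2-qubit RY/CNOT/RY ansatz, plain gradient descent): find
# circuit parameters whose output state matches the pretrained network.
import jax
import jax.numpy as jnp

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = jnp.cos(theta / 2), jnp.sin(theta / 2)
    return jnp.array([[c, -s], [s, c]])

CNOT = jnp.array([[1., 0., 0., 0.],
                  [0., 1., 0., 0.],
                  [0., 0., 0., 1.],
                  [0., 0., 1., 0.]])

def circuit_state(thetas):
    """Apply RY(t0) x RY(t1), then CNOT, then RY(t2) x RY(t3) to |00>."""
    psi = jnp.zeros(4).at[0].set(1.0)
    psi = jnp.kron(ry(thetas[0]), ry(thetas[1])) @ psi
    psi = CNOT @ psi
    return jnp.kron(ry(thetas[2]), ry(thetas[3])) @ psi

# Stand-in for the normalized state produced by the pretrained MPS above:
# the Bell state (|00> + |11>)/sqrt(2), flattened to a 4-vector.
target = jnp.array([1., 0., 0., 1.]) / jnp.sqrt(2.0)

def infidelity(thetas):
    return 1.0 - jnp.abs(jnp.vdot(target, circuit_state(thetas))) ** 2

grad_fn = jax.jit(jax.grad(infidelity))
thetas = 0.1 * jnp.ones(4)  # small, non-zero initial angles
for _ in range(500):
    thetas = thetas - 0.2 * grad_fn(thetas)

print("circuit infidelity:", infidelity(thetas))
# 'thetas' can now initialize the quantum circuit, whose training can
# then continue on quantum hardware or a quantum simulator.
```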
In this work, we therefore investigate this approach in depth, examining its ability to convert tensor networks of different topologies trained on different tasks, and identifying the hyperparameters that most strongly affect its effectiveness.
We also examine the benefits of pretraining quantum generative models by comparing them against randomly initialized models.
We find that this pretraining technique generally improves the generalization ability of the resulting quantum generative models, with some caveats that are consistent with existing findings in the literature.