Description
The variational quantum algorithm (VQA)[1] obtains a desirable quantum state by repeatedly updating the parameters of a quantum circuit. While analogous to gradient-based machine learning, it is a promising algorithm that runs on current noisy intermediate-scale quantum (NISQ) devices[2].
However, the VQA training process has a significant problem called the barren plateau[3, 4].
In each VQA step, the circuit parameters are updated based on the gradients of the target cost function with respect to those parameters.
If the gradients are too small, the algorithm cannot determine how to update the parameters, since any adjustment would yield little improvement in the cost function.
It has been shown that the gradients of parametrized quantum circuits decay exponentially with the number of qubits; this decay is called the barren plateau.
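The exponential decay of gradients can be observed numerically even at small scale. The sketch below (our own illustration, not the setup of [3]) simulates a hardware-efficient ansatz of RY rotations and CZ entanglers with NumPy, computes the exact gradient of a single parameter by the parameter-shift rule, and estimates the gradient variance over random parameter draws for increasing qubit counts.

```python
import numpy as np

I2 = np.eye(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply(state, gate, pos, n, width):
    """Apply a `width`-qubit gate starting at qubit `pos` (qubit 0 leftmost)."""
    op, q = np.eye(1), 0
    while q < n:
        if q == pos:
            op = np.kron(op, gate)
            q += width
        else:
            op = np.kron(op, I2)
            q += 1
    return op @ state

def cost(thetas, n):
    """<Z on qubit 0> after layers of RY rotations followed by a CZ chain."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    for layer in thetas:                  # thetas has shape (layers, n)
        for q in range(n):
            state = apply(state, ry(layer[q]), q, n, 1)
        for q in range(n - 1):
            state = apply(state, CZ, q, n, 2)
    probs = np.abs(state) ** 2
    z0 = np.where(np.arange(2 ** n) < 2 ** (n - 1), 1.0, -1.0)
    return probs @ z0

def grad_first_param(thetas, n):
    """Exact gradient w.r.t. thetas[0, 0] via the parameter-shift rule."""
    plus, minus = thetas.copy(), thetas.copy()
    plus[0, 0] += np.pi / 2
    minus[0, 0] -= np.pi / 2
    return 0.5 * (cost(plus, n) - cost(minus, n))

rng = np.random.default_rng(0)
layers, samples = 8, 100
grad_var = {}
for n in (2, 4, 6):
    grads = [grad_first_param(rng.uniform(0, 2 * np.pi, (layers, n)), n)
             for _ in range(samples)]
    grad_var[n] = np.var(grads)
print(grad_var)  # the gradient variance typically shrinks as qubits are added
```

The sample variance for deep random circuits typically shrinks with the qubit count, which is the barren-plateau signature; the ansatz, depth, and sample size here are illustrative choices.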
In addition, a study showed that more expressive circuits have smaller gradients[5], which implies that VQA suffers a trade-off between expressibility and trainability.
Many ideas have been proposed to train quantum circuits by VQA without getting trapped in a barren plateau[6, 7, 8].
Rudolph et al. showed that using a matrix product state (MPS) to obtain the initial circuit parameters of VQA effectively avoids the barren plateau[9] and thereby accelerates training.
A quantum state expressed as an MPS is first optimized using classical computing resources. The optimized MPS is then embedded into a quantum circuit and used as the initial parameter set for the VQA steps.
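To make the classical object in this pipeline concrete, the toy below decomposes a state vector into an MPS by sequential SVDs with a bond-dimension cutoff. The helper names (`state_to_mps`, `mps_to_state`) are our own; the pretraining of [9] optimizes the MPS directly with classical algorithms rather than decomposing a known state, so this only illustrates the MPS representation and the effect of truncation.

```python
import numpy as np

def state_to_mps(psi, n, chi):
    """Split an n-qubit state vector into MPS tensors of shape
    (left_bond, 2, right_bond) by sequential SVDs, keeping at most
    `chi` singular values per bond."""
    tensors, left = [], 1
    M = psi.reshape(2, -1)
    for site in range(n - 1):
        U, S, Vh = np.linalg.svd(M, full_matrices=False)
        k = min(chi, S.size)
        tensors.append(U[:, :k].reshape(left, 2, k))
        left = k
        M = (S[:k, None] * Vh[:k]).reshape(left * 2, -1)
    tensors.append(M.reshape(left, 2, 1))
    return tensors

def mps_to_state(tensors):
    """Contract the MPS tensors back into a dense state vector."""
    res = tensors[0]
    for t in tensors[1:]:
        res = np.tensordot(res, t, axes=([res.ndim - 1], [0]))
    return res.reshape(-1)

n = 6
rng = np.random.default_rng(1)
psi = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
psi /= np.linalg.norm(psi)

exact = state_to_mps(psi, n, chi=2 ** n)   # no truncation: exact rewrite
trunc = state_to_mps(psi, n, chi=2)        # bond dimension 2: lossy
fid_exact = abs(np.vdot(psi, mps_to_state(exact)))
fid_trunc = abs(np.vdot(psi, mps_to_state(trunc)))
print(fid_exact, fid_trunc)
```

With no truncation the MPS reproduces the state exactly; capping the bond dimension trades fidelity for a parameter count that is linear rather than exponential in the number of qubits, which is what makes the classical optimization tractable.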
Our approach extends this algorithm to tree tensor networks (TTNs).
Our algorithm receives a Hamiltonian as input and finds a near-optimal TTN structure using structural optimization[10, 11].
This step can generate any form of binary TTN, including an MPS, enhancing expressibility. The optimized TTN is then embedded into a quantum circuit consisting of layers of gates, where each layer has the same structure as the original TTN[12]. We propose an embedding algorithm whose computational cost is linear in the number of qubits.
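A linear cost is plausible because a binary TTN with n leaves has exactly n − 1 internal nodes, so a single bottom-up traversal that emits one two-qubit block per internal node touches each node once. The sketch below is not the embedding algorithm of [12]; the names (`Node`, `layout_gates`) and the choice of representative qubits are hypothetical, and it only illustrates the O(n) traversal and the fact that an MPS is the special "caterpillar" case of a binary TTN.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A binary-TTN node; leaves carry a qubit index, internal nodes two children."""
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    qubit: Optional[int] = None   # set on leaves only

def layout_gates(root):
    """Post-order walk that emits one two-qubit gate per internal node,
    acting on representative qubits of its two subtrees. Each node is
    visited once, so the cost is linear in the number of leaves."""
    gates = []
    def visit(node):
        if node.qubit is not None:        # leaf
            return node.qubit
        ql = visit(node.left)
        qr = visit(node.right)
        gates.append((ql, qr))            # placeholder for a two-qubit unitary
        return ql                         # qubit representing this subtree
    visit(root)
    return gates

# An MPS corresponds to the maximally unbalanced (caterpillar) binary TTN:
leaves = [Node(qubit=q) for q in range(4)]
mps_like = leaves[0]
for leaf in leaves[1:]:
    mps_like = Node(left=mps_like, right=leaf)
print(layout_gates(mps_like))   # [(0, 1), (0, 2), (0, 3)]
```

Changing the tree shape changes the gate layout but not the n − 1 gate count, which is why structural optimization can improve expressibility without increasing the embedding cost's scaling.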
[1] M. Cerezo et al., Variational quantum algorithms, Nature Reviews Physics 3, 625 (2021).
[2] J. Preskill, Quantum computing in the NISQ era and beyond, Quantum 2, 79 (2018).
[3] J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, and H. Neven, Barren plateaus in quantum neural network training landscapes, Nature Communications 9, 4812 (2018).
[4] M. Cerezo, A. Sone, T. Volkoff, L. Cincio, and P. J. Coles, Cost function dependent barren plateaus in shallow parametrized quantum circuits, Nature Communications 12, 1791 (2021).
[5] Z. Holmes, K. Sharma, M. Cerezo, and P. J. Coles, Connecting ansatz expressibility to gradient magnitudes and barren plateaus, arXiv preprint arXiv:2101.02138 (2021).
[6] H.-Y. Liu, T.-P. Sun, Y.-C. Wu, Y.-J. Han, and G.-P. Guo, Mitigating barren plateaus with transfer-learning-inspired parameter initializations, New Journal of Physics 25, 013039 (2023).
[7] A. Skolik, J. R. McClean, M. Mohseni, P. Van Der Smagt, and M. Leib, Layerwise learning for quantum neural networks, Quantum Machine Intelligence 3, 5 (2021).
[8] L. Friedrich and J. Maziero, Avoiding barren plateaus with classical deep neural networks, Physical Review A 106, 042433 (2022).
[9] M. S. Rudolph, J. Miller, D. Motlagh, J. Chen, A. Acharya, and A. Perdomo-Ortiz, Synergistic pretraining of parametrized quantum circuits via tensor networks, Nature Communications 14, 8367 (2023).
[10] T. Hikihara, H. Ueda, K. Okunishi, K. Harada, and T. Nishino, Automatic structural optimization of tree tensor networks, Physical Review Research 5, 013031 (2023).
[11] T. Hikihara, H. Ueda, K. Okunishi, K. Harada, and T. Nishino, Improving accuracy of tree-tensor network approach by optimization of network structure, arXiv preprint arXiv:2501.15514 (2025).
[12] S. Sugawara, K. Inomata, T. Okubo, and S. Todo, Embedding of tree tensor networks into shallow quantum circuits, arXiv preprint arXiv:2501.18856 (2025).