Description
I will present a general framework for adding degrees of freedom to the bonds of a tensor network (i.e., bond expansion), with applications to DMRG, TDVP, and other algorithms. Our approach makes use of reduced-rank singular value decompositions, so that every operation required for the bond expansion has computational complexity at most quadratic in the bond dimension $D$ and linear in the local Hilbert space dimension $d$, which is much cheaper than the other components of DMRG that scale as $D^3$. The 'pre-expansion' approach interpolates between single-site and 2-site DMRG, giving convergence similar to 2-site DMRG but otherwise the same performance as single-site DMRG. 'Post-expansion' is a successor to the single-site subspace-expansion (3S) algorithm for models with long-range interactions, with better convergence properties and easier control. These algorithms perform better than conventional DMRG in all cases, but they are especially useful for models where the local Hilbert space dimension is large, such as those with bosonic degrees of freedom.
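To illustrate the kind of operation the abstract describes, here is a minimal sketch (not the speaker's actual pre-/post-expansion algorithms) of enlarging an MPS bond using a reduced-rank (randomized, truncated) SVD. The function names `reduced_rank_svd` and `expand_bond`, and the use of an unspecified expansion matrix `R`, are assumptions for illustration only; the point is that the SVD cost scales roughly as $O(D^2 d\,k)$ for a small number of added states $k$, rather than $O(D^3)$.

```python
import numpy as np

def reduced_rank_svd(M, k, oversample=8, rng=None):
    """Randomized rank-k SVD: avoids a full SVD; cost ~ O(m*n*k) for an m x n matrix."""
    rng = np.random.default_rng() if rng is None else rng
    sketch = M @ rng.standard_normal((M.shape[1], min(M.shape[1], k + oversample)))
    Q, _ = np.linalg.qr(sketch)                       # orthonormal basis for the dominant range of M
    U_small, S, Vh = np.linalg.svd(Q.conj().T @ M, full_matrices=False)
    return (Q @ U_small)[:, :k], S[:k], Vh[:k, :]

def expand_bond(A, B, R, k_expand, rng=None):
    """Append k_expand states to the bond between A (Dl, d, D) and B (D, d, Dr).

    R is a (Dl*d, *) expansion matrix whose dominant left singular vectors supply
    the new bond states; B is zero-padded so the represented state is unchanged.
    """
    Dl, d, D = A.shape
    U, _, _ = reduced_rank_svd(R, k_expand, rng=rng)            # new directions, shape (Dl*d, k)
    A_exp = np.concatenate([A.reshape(Dl * d, D), U], axis=1)   # enlarge bond: D -> D + k
    B_exp = np.concatenate([B, np.zeros((k_expand, B.shape[1], B.shape[2]))], axis=0)
    return A_exp.reshape(Dl, d, D + k_expand), B_exp

# Tiny usage example with random tensors:
rng = np.random.default_rng(0)
Dl, d, D, Dr, k = 16, 4, 16, 16, 4
A = rng.standard_normal((Dl, d, D))
B = rng.standard_normal((D, d, Dr))
R = rng.standard_normal((Dl * d, D))   # stand-in for the algorithm-specific expansion term
A2, B2 = expand_bond(A, B, R, k)
print(A2.shape, B2.shape)              # (16, 4, 20) and (20, 4, 16)
```

Because the new bond states of `A` are matched by zero blocks in `B`, the expanded pair represents the same state as before; a subsequent single-site optimization can then populate the enlarged bond.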