Invited talks and conveners:
- Ji-Yao Chen, convened by Wei-Lin Tu (Keio University)
- Garnet Kin-Lic Chan, convened by Wei-Lin Tu (Keio University)
- Pochung Chen, convened by Yi-Ping Huang (National Tsing Hua University)
- Naoki Kawashima, convened by Tsuyoshi Okubo (Univ. of Tokyo)
- Ian McCulloch, convened by Tsuyoshi Okubo (Univ. of Tokyo)
- Tao Xiang, convened by Chia-Min Chung (NSYSU)
- Hiroshi Shinaoka, convened by Chia-Min Chung (NSYSU)
- Ying-Jer Kao, convened by Yi-Ping Huang (National Tsing Hua University)
- Lei Wang, convened by Yusuke Nomura (Keio University)
- Thomas Ayral, convened by Yusuke Nomura (Keio University)
- Mingpu Qin, convened by Wei-Lin Tu (Keio University)
- Didier Poilblanc, convened by Wei-Lin Tu (Keio University)
- Keisuke Fujii, convened by Satoshi Morita (Keio University)
- Miles Stoudenmire, convened by Satoshi Morita (Keio University)
Tensor networks provide a way to systematically study not only the ground states but also the excitation spectra of quantum many-body systems. When dealing with the latter, diagrammatic summations typically arise. In this talk, I will describe the idea of using generating functions to solve this problem in the context of both matrix product states and projected entangled-pair states. As an...
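A minimal numerical sketch of the generating-function idea (my own illustration, not the speaker's implementation): a sum of N "one-insertion" diagrams, sum_i A_1...B_i...A_N, equals the derivative of the single perturbed product prod_i (A_i + lam*B_i) at lam = 0, so one sweep replaces N separate contractions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 6, 4
As = [rng.normal(size=(D, D)) for _ in range(N)]
Bs = [rng.normal(size=(D, D)) for _ in range(N)]

# Brute-force diagrammatic sum: one B insertion at each position.
direct = sum(
    np.linalg.multi_dot([*As[:i], Bs[i], *As[i + 1:]]) for i in range(N)
)

# Generating function: carry (value, derivative) pairs through one sweep.
val, der = np.eye(D), np.zeros((D, D))
for A, B in zip(As, Bs):
    val, der = val @ A, der @ A + val @ B  # product rule at lam = 0

print(np.allclose(direct, der))  # True
```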
In this talk, we discuss how to perform tensor-network-based finite-size scaling analysis for 2D classical models. We first use HOTRG to renormalize the weight tensor of the partition function, and then use the renormalized tensor to construct an approximate transfer matrix of an infinite strip of finite width. By diagonalizing the transfer matrix, we obtain physical quantities such as the...
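A minimal sketch of the transfer-matrix step (assumptions mine): the row-to-row transfer matrix of an infinite 2D Ising strip is built directly from the bare Boltzmann weights at a small width and diagonalized. The talk's method first coarse-grains the weight tensor with HOTRG so that much larger effective widths become reachable; that step is omitted here.

```python
import numpy as np

L, beta = 8, 0.4406868  # strip width; near the square-lattice critical coupling

# All 2**L spin configurations of one row, as -1/+1 spins.
spins = np.array([[(s >> i) & 1 for i in range(L)] for s in range(2 ** L)])
spins = 2 * spins - 1

# Intra-row energy (periodic BC) and inter-row energy between row configs.
intra = -np.sum(spins * np.roll(spins, 1, axis=1), axis=1)
inter = -spins @ spins.T

# Symmetrized transfer matrix; diagonalize to get the leading eigenvalues.
T = np.exp(-beta * (inter + 0.5 * (intra[:, None] + intra[None, :])))
w = np.linalg.eigvalsh(T)
xi = 1.0 / np.log(w[-1] / w[-2])  # correlation length from the spectral gap
print(f"correlation length at width {L}: {xi:.3f}")
```

Quantities like this correlation length, computed at a sequence of widths, are the raw input to a finite-size scaling analysis.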
I review several attempts at effective compression of tensor networks, including relatively new findings of our own. Tensor network compression is a key technology in a few entirely different contexts. First, many massive data sets, such as collections of images or customer preferences, are naturally viewed as big tensors. In recognizing, correcting, and compressing such data sets, decomposition...
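A minimal sketch (my illustration, not the talk's findings): compressing a 3-way data tensor in Tucker format via a truncated higher-order SVD, the kind of standard decomposition the abstract alludes to for image or preference data.

```python
import numpy as np

# A smooth, low-multilinear-rank test tensor standing in for real data.
i, j, k = np.meshgrid(np.arange(40), np.arange(50), np.arange(60), indexing="ij")
X = np.sin(i / 6.0) * np.cos(j / 9.0) + np.exp(-k / 30.0)
ranks = (4, 4, 4)

# Factor matrices: leading left singular vectors of each mode unfolding.
factors = []
for mode, r in enumerate(ranks):
    unfold = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
    U, _, _ = np.linalg.svd(unfold, full_matrices=False)
    factors.append(U[:, :r])

def multi_mode(core, mats):
    # Contract mode m of `core` with mats[m], for every mode.
    for mode, M in enumerate(mats):
        core = np.moveaxis(
            np.tensordot(M, np.moveaxis(core, mode, 0), axes=1), 0, mode
        )
    return core

core = multi_mode(X, [U.T for U in factors])  # small Tucker core
Xhat = multi_mode(core, factors)              # reconstruction
print("relative error:", np.linalg.norm(X - Xhat) / np.linalg.norm(X))
```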
I will present a general framework for incorporating degrees of freedom into a tensor network (i.e., bond expansion), with applications to DMRG, TDVP, and other algorithms. Our approach makes use of reduced-rank singular value decompositions, such that all operations required for the bond expansion have computational complexity that is at most quadratic in the bond dimension $D$ and linear in...
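The reduced-rank SVD machinery is not reproduced here; the following toy (assumptions mine, not the talk's algorithm) only illustrates the basic bond-expansion move it accelerates: extra directions are appended to the left tensor's bond basis and the right tensor is zero-padded, which enlarges the bond without changing the represented state.

```python
import numpy as np

rng = np.random.default_rng(2)
Dl, d, D, Dr, k = 3, 2, 4, 3, 2

A = rng.normal(size=(Dl * d, D))  # left MPS tensor, legs grouped as (Dl*d, D)
B = rng.normal(size=(D, d * Dr))  # right MPS tensor, legs grouped as (D, d*Dr)

# Candidate new directions, orthogonalized against span(A). Here they are
# random; real algorithms obtain them from a reduced-rank SVD of a residual.
P = rng.normal(size=(Dl * d, k))
P -= A @ np.linalg.lstsq(A, P, rcond=None)[0]

A_exp = np.hstack([A, P])                      # bond dimension D -> D + k
B_exp = np.vstack([B, np.zeros((k, d * Dr))])  # zero rows keep the state intact

print(np.allclose(A @ B, A_exp @ B_exp))       # True: same state, larger bond
```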
The tensor-network renormalization group is known for its profound implications for understanding and solving correlated quantum systems. I will explore sophisticated tensor-network techniques for assessing dynamical excitations in low-dimensional quantum lattice models:
1. Introduce a matrix-product representation for low-energy excited states and present methods for their precise...
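As a point of reference for such excited-state methods, here is a brute-force baseline (my own toy, not the talk's matrix-product approach): exact diagonalization of a small periodic Heisenberg chain, whose low-lying levels are the kind of target a matrix-product excitation ansatz reaches at much larger system sizes.

```python
import numpy as np

N = 10  # chain length; Hilbert-space dimension 2**N = 1024
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
eye = np.eye(2)

def two_site(op1, op2, i, j, n):
    # Kronecker product placing op1 on site i and op2 on site j.
    ops = [eye] * n
    ops[i], ops[j] = op1, op2
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

# Heisenberg Hamiltonian with periodic boundary conditions.
H = sum(
    two_site(s, s, i, (i + 1) % N, N)
    for i in range(N)
    for s in (sx, sy, sz)
)
evals = np.linalg.eigvalsh(H)
print("lowest levels:", np.round(evals[:4], 4))
```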
Tensor networks are a powerful tool for compressing wave functions and density matrices of quantum systems in physics. Recent developments have shown that tensor network techniques can efficiently compress many functions beyond these traditional objects. Notable examples include solutions of the turbulent Navier–Stokes equations [1] and the computation of Feynman diagrams [2,3]. These...
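A minimal sketch (assumptions mine) of the "quantics" idea behind compressing generic functions with tensor networks: a function sampled on 2**R points is reshaped into an R-leg tensor of dimension-2 legs and factorized into an MPS by sequential truncated SVDs; small bond dimensions signal that the function is highly compressible.

```python
import numpy as np

R, tol = 16, 1e-10
x = np.linspace(0.0, 1.0, 2 ** R, endpoint=False)
f = np.exp(-5 * x) * np.sin(40 * np.pi * x)  # smooth oscillatory test function

# Peel off one dimension-2 leg at a time with a truncated SVD.
rest, bond_dims = f.reshape(2, -1), []
for _ in range(R - 1):
    U, S, Vh = np.linalg.svd(rest, full_matrices=False)
    keep = max(1, int(np.sum(S > tol * S[0])))
    bond_dims.append(keep)
    rest = (S[:keep, None] * Vh[:keep]).reshape(keep * 2, -1)

print("quantics MPS bond dimensions:", bond_dims)
```

For this function the bond dimensions stay tiny even though the grid has 65,536 points, which is the compression effect the abstract refers to.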
In this talk, I will introduce a hybrid model combining a quantum-inspired tensor network (TN) and a variational quantum circuit (VQC) to perform supervised and reinforcement learning tasks. This architecture allows the classical and quantum parts of the model to be trained simultaneously, providing an end-to-end training framework. We show that compared to the principal component...
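A forward-pass sketch of such a hybrid (the layer sizes, the tanh head, and the two-qubit circuit are all my assumptions, not the talk's architecture): a small low-rank tensor-network layer compresses classical features into a few angles that parameterize a simulated variational circuit, and the prediction is a Pauli-Z expectation value.

```python
import numpy as np

rng = np.random.default_rng(3)

def ry(theta):
    # Single-qubit Y rotation.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
Z0 = np.kron(np.diag([1.0, -1.0]), np.eye(2))  # Z on qubit 0

x = rng.normal(size=16)                        # classical input features
cores = [rng.normal(size=(16, 4)), rng.normal(size=(4, 2))]
angles = np.tanh(x @ cores[0] @ cores[1])      # low-rank (TN-style) head

psi = np.zeros(4)
psi[0] = 1.0                                   # |00>
psi = np.kron(ry(angles[0]), ry(angles[1])) @ psi
psi = CNOT @ psi                               # entangling layer

print("prediction <Z0> =", psi @ Z0 @ psi)
```

End-to-end training would backpropagate through both the classical cores and the circuit parameters; only the inference path is shown here.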
We introduce CrystalFormer, a transformer-based autoregressive model specifically designed for space-group-controlled generation of crystalline materials. The space group symmetry significantly simplifies the crystal space, which is crucial for data- and compute-efficient generative modeling of crystalline materials. Leveraging the prominent discrete and sequential nature of the Wyckoff...
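An entirely schematic sketch of the autoregressive pattern the abstract describes (not CrystalFormer's code; the vocabulary, masking rule, and random logits are placeholders): tokens are drawn one at a time from p(token | prefix), and a constraint-derived mask zeroes out tokens that are invalid in the current context, which is how a discrete symmetry constraint such as a space group can steer generation.

```python
import numpy as np

rng = np.random.default_rng(4)
V, L = 8, 6  # vocabulary size, sequence length

def logits(prefix):
    # Stand-in for a trained transformer (random here).
    return rng.normal(size=V)

def valid_mask(prefix):
    # Stand-in for symmetry constraints; toy rule: no immediate repeats.
    mask = np.ones(V, bool)
    if prefix:
        mask[prefix[-1]] = False
    return mask

seq = []
for _ in range(L):
    z = logits(seq)
    z[~valid_mask(seq)] = -np.inf          # forbid invalid tokens
    p = np.exp(z - z.max())
    p /= p.sum()
    seq.append(int(rng.choice(V, p=p)))    # sample from the masked distribution
print("sampled token sequence:", seq)
```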
I will introduce a new tensor network ansatz called Fully-augmented Matrix Product States (FAMPS), in which an MPS is augmented with disentanglers to encode area-law-like entanglement entropy (the entanglement entropy supported by FAMPS scales as $l$ for an $l \times l$ system). I will discuss the optimization algorithm for FAMPS in the study of 2D quantum systems. With FAMPS, we reexamine the...
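FAMPS optimizes a layer of disentanglers on top of the MPS; the following toy (my own, with a crude random search standing in for real optimization) only illustrates why that helps: a well-chosen two-site unitary across a cut lowers the bipartite entanglement the underlying MPS must carry.

```python
import numpy as np

rng = np.random.default_rng(5)

def entropy(psi):
    # Entanglement entropy of a 4-qubit state across the middle cut.
    s = np.linalg.svd(psi.reshape(4, 4), compute_uv=False)
    p = s ** 2 / np.sum(s ** 2)
    return -np.sum(p * np.log(p + 1e-30))

psi = rng.normal(size=16) + 1j * rng.normal(size=16)
psi /= np.linalg.norm(psi)
print("entropy before:", entropy(psi))

best = psi
for _ in range(200):  # crude search over random candidate disentanglers
    Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
    # Unitary acting on qubits 2 and 3, the pair straddling the cut.
    cand = np.einsum("ab,ibj->iaj", Q, best.reshape(2, 4, 2)).reshape(16)
    if entropy(cand) < entropy(best):
        best = cand
print("entropy after :", entropy(best))
```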
Quantum machine learning, a promising application of quantum computers, explores both implicit models, which use quantum kernel methods, and explicit models, known as quantum circuit learning. While implicit models often achieve lower training errors, their prediction time scales linearly with the data size and they are prone to overfitting. Explicit models predict in constant time but encounter challenges with optimization,...
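The implicit/explicit contrast can be seen with classical stand-ins (these are not quantum models; the RBF kernel and linear basis below are my choices): the kernel model must touch every training point at prediction time, so its cost grows with the data size, while the parametric model predicts in constant time.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Implicit model: kernel ridge regression with an RBF kernel.
K = np.exp(-np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))
alpha = np.linalg.solve(K + 1e-3 * np.eye(200), y)

def predict_implicit(x):
    # O(N) per prediction: one kernel entry per training point.
    k = np.exp(-np.sum((X - x) ** 2, axis=-1))
    return k @ alpha

# Explicit model: a fixed-size basis fit by least squares.
Phi = np.column_stack([X, np.sin(X), np.ones(len(X))])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

def predict_explicit(x):
    # O(1) in the data size: only the fixed parameter vector is used.
    return np.concatenate([x, np.sin(x), [1.0]]) @ w

x0 = rng.normal(size=3)
print(predict_implicit(x0), predict_explicit(x0))
```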