Speaker
Description
I review several attempts at effective compression of tensor networks, including relatively new findings of our own. Tensor network compression is a key technology in a few entirely different contexts. First, many massive data sets, such as collections of images or customer preferences, are naturally viewed as big tensors. For recognizing, correcting, and compressing such data sets, decomposition into a network of smaller tensors is a promising candidate, and we then encounter the need to optimize and compress the tensor network. Second, in statistical mechanics, the real-space renormalization group method à la Kadanoff has recently been re-formulated in terms of tensor networks, and it produces highly accurate predictions of critical information: not only the first few relevant scaling dimensions but also higher ones, together with operator-product-expansion coefficients. There, the renormalization mapping can be viewed as a compression of the tensor network. We recently found that the same technique is useful in both contexts.
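To fix ideas about what "decomposition into a network of smaller tensors" means in the data-compression context, here is a minimal sketch (not the speaker's method; the function names and the toy data tensor are illustrative assumptions). It performs a tensor-train decomposition of a 4-way tensor by sequential truncated SVDs, with the bond-dimension cutoff playing the role of compression.

```python
# Illustrative sketch only: tensor-train compression of a data tensor via truncated SVDs.
import numpy as np

def tensor_train_decompose(tensor, max_rank):
    """Split an n-way tensor into a chain of 3-way cores, truncating each SVD."""
    dims = tensor.shape
    cores = []
    rank = 1
    mat = tensor.reshape(rank * dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        new_rank = min(max_rank, len(s))  # bond-dimension truncation = compression
        cores.append(u[:, :new_rank].reshape(rank, dims[k], new_rank))
        mat = (np.diag(s[:new_rank]) @ vt[:new_rank]).reshape(new_rank * dims[k + 1], -1)
        rank = new_rank
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tensor_train_contract(cores):
    """Contract the chain of cores back into a full tensor (to check the error)."""
    result = cores[0]
    for core in cores[1:]:
        result = np.tensordot(result, core, axes=([-1], [0]))
    return result.squeeze(axis=(0, -1))

# Example: a smooth 4-way "data" tensor compresses well at small bond dimension.
x = np.linspace(0, 1, 8)
data = np.cos(x[:, None, None, None] + x[None, :, None, None]
              + x[None, None, :, None] + x[None, None, None, :])
cores = tensor_train_decompose(data, max_rank=4)
approx = tensor_train_contract(cores)
print("relative error:", np.linalg.norm(approx - data) / np.linalg.norm(data))
```

The truncation rank controls the trade-off between storage and accuracy; choosing how (and where in the network) to truncate is exactly the optimization problem the talk refers to.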