Description
Quantum error mitigation methods are designed to eliminate the effect of noise in quantum computation by introducing a trade-off between bias and variance, using modified quantum circuits and classical postprocessing. While various techniques have been proposed, each with its own advantages and disadvantages, there is still no universal criterion for choosing the best method for a given application. In this talk, we focus on the sample complexity, or measurement overhead, of performing unbiased quantum error mitigation, and discuss the performance bounds and their achievability. Based on quantum estimation theory, we show that the overhead generally grows exponentially with the circuit depth, and also with the number of qubits in scrambling quantum circuits. We then show that the bounds on the overhead are provably tight under white noise, and that a simple rescaling technique achieves cost optimality. Based on numerical simulations, we argue that a wide class of unital and nonunital noise is converted into white noise under sufficiently deep scrambling quantum circuits. This implies that our findings become increasingly important as the error rate is reduced by hardware advances or by the implementation of error correction. In this context, we also discuss how to suppress algorithmic errors in a cost-optimal way within the framework of fault-tolerant quantum computing.
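
As an illustration (a sketch added here, not part of the abstract itself), consider the idealized white-noise model in which a depth-d circuit on n qubits outputs the state \rho = F\,\sigma + (1-F)\,I/2^n, where \sigma is the ideal output state and F is the circuit fidelity; the symbols d, n, F, \epsilon below are assumptions introduced only for this example. Under this model, the rescaling technique mentioned above amounts to

\[
\langle O \rangle_{\mathrm{ideal}} = \frac{1}{F}\left( \mathrm{Tr}[O\rho] - (1-F)\,\frac{\mathrm{Tr}[O]}{2^n} \right),
\]

so an unbiased estimate is obtained by classically rescaling the noisy expectation value. The estimator's variance is then amplified by roughly 1/F^2; if each layer has fidelity (1-\epsilon), then F \approx (1-\epsilon)^d and the measurement overhead scales as (1-\epsilon)^{-2d}, i.e., exponentially in the depth d, which is consistent with the general bounds discussed in the talk.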