The threshold theorem is a seminal result in quantum computing asserting that arbitrarily long quantum computations can be performed reliably on a faulty quantum computer, provided that the noise level is below some constant threshold. This remarkable result comes at the price of increasing the number of qubits (quantum bits) by a large factor that scales polylogarithmically with the size of the quantum computation we wish to realize. Minimizing the space overhead of fault-tolerant quantum computation is therefore a pressing challenge, crucial for benefiting from the computational potential of quantum devices.
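To make the overhead claim concrete, the scaling at stake can be stated roughly as follows (the notation below is ours and only indicative): for an ideal circuit $C$ acting on $k$ qubits and a target failure probability $\varepsilon$, the standard threshold theorem yields a fault-tolerant simulation using
$$ n_{\text{physical}} = O\!\big(k \cdot \mathrm{polylog}(|C|/\varepsilon)\big) $$
physical qubits, where $|C|$ denotes the size of the circuit, whereas a constant-overhead construction of the kind studied here uses only $n_{\text{physical}} = O(k)$ physical qubits.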
In this paper, we study the asymptotic scaling of the space overhead needed for fault-tolerant quantum computation. We show that the polylogarithmic factor in the standard threshold theorem is in fact not needed and that there is a fault-tolerant construction that uses a number of qubits that is only a constant factor larger than the number of qubits of the ideal computation. This result was conjectured by Gottesman, who suggested replacing the concatenated codes of the standard threshold theorem with quantum error-correcting codes of constant encoding rate. The main challenge was then to find an appropriate family of quantum codes together with an efficient classical decoding algorithm that works even with a noisy syndrome. The efficiency constraint is crucial here: qubits are inherently noisy, and faults keep accumulating during the decoding process. The role of the decoder is therefore to keep the number of errors under control throughout the computation.
On a technical level, our main contribution is the analysis of the SMALL-SET-FLIP decoding algorithm applied to the family of quantum expander codes. We show that it can be parallelized to run in constant time while correcting sufficiently many errors on both the qubits and the syndrome to keep the error under control. These tools can be seen as a quantum generalization of the BIT-FLIP algorithm applied to the (classical) expander codes of Sipser and Spielman.
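For readers unfamiliar with the classical side of this analogy, the following is a minimal Python sketch of a bit-flipping decoder in the spirit of Sipser and Spielman's algorithm; it is an illustration only, not the SMALL-SET-FLIP decoder analyzed in this paper. The variant shown flips, in each round, the single bit involved in the most unsatisfied parity checks, and the [7,4] Hamming matrix is a toy placeholder rather than an expander code.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Simple bit-flipping decoder for a binary linear code.

    H: binary parity-check matrix, shape (m, n)
    y: received word over GF(2), length n

    In each round, compute the syndrome and flip the bit that is
    involved in the largest number of unsatisfied parity checks.
    """
    y = y.copy() % 2
    for _ in range(max_iters):
        syndrome = H @ y % 2                 # 1 = unsatisfied check
        if not syndrome.any():
            return y                          # valid codeword reached
        unsat_counts = H.T @ syndrome         # unsatisfied checks per bit
        if unsat_counts.max() == 0:
            break                             # stuck: no helpful flip
        y[np.argmax(unsat_counts)] ^= 1       # flip the worst bit
    return y

# Toy usage: the [7,4] Hamming code (a placeholder, not an expander code).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
received = np.zeros(7, dtype=int)
received[2] ^= 1                              # inject a single bit flip
print(bit_flip_decode(H, received))           # -> all-zero codeword
```

In Sipser and Spielman's analysis, it is the expansion of the underlying bipartite graph that guarantees such flips make progress; the quantum analysis has to preserve this kind of guarantee even when the syndrome itself is noisy.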
Quantum computers are expected to offer significant, sometimes exponential, speedups compared to classical computers. For this reason, building a large, universal quantum computer is a central objective of modern science. Despite two decades of effort, experimental progress has been somewhat slow, and the largest computers available at the moment reach a few tens of physical qubits, still quite far from the numbers necessary to run "interesting" algorithms. A major source of difficulty is the extreme fragility of quantum information: storing a qubit is already very challenging, and processing quantum information is even more so.
Any physical implementation of a quantum computer is unavoidably imperfect because qubits are subject to decoherence and physical gates can only be approximately realized. In order to compute the outcome of an ideal circuit C using imperfect qubits and gates, the idea is to transform C into another circuit C', which gives the same outcome with high probability, even if its components are noisy. It is common to refer to the gates or wires of the circuit C as logical gates or wires and to those of C' as the physical ones.
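As a purely classical analogy (not the construction used in this paper), the snippet below illustrates the logical/physical distinction with a 3-bit repetition code: one logical bit is stored in several noisy physical bits and recovered by majority vote, so that the logical error rate is much smaller than the physical error rate.

```python
import random

def encode(logical_bit, n=3):
    """Encode one logical bit into n physical bits (repetition code)."""
    return [logical_bit] * n

def noisy_channel(physical_bits, p):
    """Each physical bit is flipped independently with probability p."""
    return [b ^ (random.random() < p) for b in physical_bits]

def decode(physical_bits):
    """Majority vote recovers the logical bit if fewer than half flipped."""
    return int(sum(physical_bits) > len(physical_bits) / 2)

random.seed(0)
p = 0.05                                   # physical error rate
trials = 100_000
logical_errors = sum(
    decode(noisy_channel(encode(1), p)) != 1 for _ in range(trials)
)
# The observed logical error rate (about 3*p^2) is far below the physical rate p.
print(logical_errors / trials, "vs physical rate", p)
```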