Compare Papers
Paper 1
Polylog-time- and constant-space-overhead fault-tolerant quantum computation with quantum low-density parity-check codes
Shiro Tamiya, Masato Koashi, Hayata Yamasaki
- Year
- 2024
- Journal
- arXiv preprint
- DOI
- arXiv:2411.03683
- arXiv
- 2411.03683
A major challenge in fault-tolerant quantum computation (FTQC) is to reduce both space overhead -- the large number of physical qubits per logical qubit -- and time overhead -- the long physical gate sequences per logical gate. We prove that a protocol using non-vanishing-rate quantum low-density parity-check (LDPC) codes, combined with concatenated Steane codes, achieves constant space overhead and polylogarithmic time overhead, even when accounting for non-zero classical computation time. This protocol offers an improvement over existing constant-space-overhead protocols, which have polynomial time overhead using quantum LDPC codes and quasi-polylogarithmic time overhead using concatenated quantum Hamming codes. To ensure the completeness of this proof, we develop a technique called partial circuit reduction, which enables error analysis for the entire fault-tolerant circuit by examining smaller parts composed of a few gadgets. With this technique, we resolve a previously unaddressed logical gap in the existing arguments and complete the proof of the threshold theorem for the constant-space-overhead protocol with quantum LDPC codes. Our work highlights that the quantum-LDPC-code approach can realize FTQC with a negligibly small slowdown and a bounded overhead of physical qubits, similar to the code-concatenation approach, underscoring the importance of a comprehensive comparison of the future realizability of these two approaches.
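The two overhead notions in the abstract can be written compactly. The notation below (n physical qubits protecting k logical qubits, original circuit depth D, fault-tolerant circuit depth D_FT, target error ε) is illustrative and follows standard convention; it is not taken from the paper itself.

```latex
% Illustrative definitions (standard convention, not from the paper):
% n = physical qubits, k = logical qubits,
% D = depth of the original circuit, D_FT = depth of the fault-tolerant circuit.
\[
  \text{space overhead} \;=\; \frac{n}{k} \;=\; O(1),
  \qquad
  \text{time overhead} \;=\; \frac{D_{\mathrm{FT}}}{D} \;=\; \mathrm{polylog}\!\left(\tfrac{1}{\varepsilon}\right).
\]
```

The paper's claim is that both bounds hold simultaneously for its LDPC-plus-concatenated-Steane protocol, even when the nonzero runtime of classical decoding is accounted for.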
Paper 2
Lottery BP: Unlocking Quantum Error Decoding at Scale
Yanzhang Zhu, Chen-Yu Peng, Yun Hao Chen, Yeong-Luh Ueng, Di Wu
- Year
- 2026
- Journal
- arXiv preprint
- DOI
- arXiv:2605.00038
- arXiv
- 2605.00038
To enable fault tolerance on millions of qubits in real time, scalable decoding is necessary, which motivates this paper. Existing decoding algorithms (decoders), such as clustering, matching, belief propagation (BP), and neural networks, suffer from one or more of inaccuracy, high cost, and incompatibility across a broad set of quantum error correction codes, such as the surface code, toric code, and bivariate bicycle code. There thus remains a gap between existing decoders and an ideal decoder that is simultaneously accurate, fast, general, and scalable. This paper makes three contributions: a decoder, a decoder architecture, and a decoding simulator. First, we propose Lottery BP, a decoder that introduces randomness during decoding. Lottery BP improves decoding accuracy over BP by 2~8 orders of magnitude for topological codes. To efficiently decode multi-round measurement errors, we propose syndrome vote as a pre-processing step before Lottery BP, which compresses multiple rounds of syndromes into one. Syndrome vote increases the latency margin of decoding and mitigates the backlog problem. Second, we design the PolyQec architecture, which implements Lottery BP as a local decoder and ordered statistics decoding (OSD) as a global decoder, and is configurable for surface/toric codes and X/Z checks. Because Lottery BP boosts local decoding accuracy, PolyQec invokes the costly global OSD decoder less frequently than BP+OSD, e.g., 3~5 orders of magnitude less often for topological codes, enhancing scalability. Third, to evaluate decoders fairly, we develop a PyTorch-based decoding simulator, Syndrilla, that modularizes the simulation pipeline and allows new decoders to be added flexibly. We formulate multiple metrics to quantify decoder performance and integrate them into Syndrilla. Running on GPUs, Syndrilla is 1~2 orders of magnitude faster than on CPUs.
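The abstract says syndrome vote "compresses multiple rounds of syndromes into one" before decoding. One plausible reading is a per-check majority vote across measurement rounds; the sketch below implements that assumption. The function name and the exact voting rule are mine, not taken from the paper.

```python
def syndrome_vote(syndromes):
    """Compress R measurement rounds of syndrome bits into a single round
    by per-check majority vote (an assumed reading of the paper's
    'syndrome vote' pre-processing step).

    syndromes: list of R rounds, each a list of 0/1 check outcomes.
    Returns one compressed list of 0/1 check outcomes.
    """
    rounds = len(syndromes)
    num_checks = len(syndromes[0])
    # A check fires in the compressed syndrome iff it fired in a strict
    # majority of the R measurement rounds, suppressing isolated
    # measurement errors in individual rounds.
    return [
        1 if sum(round_[c] for round_ in syndromes) * 2 > rounds else 0
        for c in range(num_checks)
    ]

# Example: check 0 fires in all 3 rounds (kept); checks 1 and 2 fire in
# only one round each (treated as measurement noise and dropped).
print(syndrome_vote([[1, 0, 1],
                     [1, 1, 0],
                     [1, 0, 0]]))  # → [1, 0, 0]
```

Feeding the decoder one compressed round instead of R rounds is what would increase the latency margin the abstract mentions: the downstream decoder is invoked once per R rounds rather than once per round.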