Compare Papers
Paper 1
Scalable Postselection of Quantum Resources
J. Wilson Staples, Winston Fu, Jeff D. Thompson
- Year: 2026
- Journal: arXiv preprint
- DOI: arXiv:2603.08697
- arXiv: 2603.08697
The large overhead imposed by quantum error correction is a critical challenge to the realization of quantum computers, and motivates the search for alternative error-correcting codes and fault-tolerant circuit constructions. Postselection is a powerful tool that builds large programs out of probabilistically generated sub-circuits, and has been shown to increase the threshold of quantum error correction based on fusing fixed-size resource states or concatenated codes. In this work, we present an approach to lowering the overhead of quantum computing using scalable postselection, based on directly postselecting sub-circuits whose size is extensive in the code distance, using decoder soft information. We introduce a metric, the partial gap, that estimates what the logical gap of a resource state will be after it is consumed, and show that postselection based on the partial gap leads to scalable improvements in the logical error rate. In the specific context of implementing logical gates via teleportation through a cluster state, we demonstrate that scalable postselection provides a $4\times$ reduction in the overhead per logical gate at the same logical error probability.
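The core idea of soft-information postselection can be sketched as follows: the decoder reports a confidence value (a gap between the best and second-best logical hypothesis), and resource states whose gap falls below a threshold are discarded. This is a minimal toy sketch, not the paper's method: the relation `logical_error_prob = 1/(1+exp(gap))`, the exponential gap distribution, and the threshold value are illustrative assumptions.

```python
import math
import random

def logical_error_prob(gap):
    # Assumed toy model: if the gap is the log-likelihood ratio between the
    # best and second-best logical class, the posterior probability of a
    # logical error on this state is 1 / (1 + exp(gap)).
    return 1.0 / (1.0 + math.exp(gap))

def postselect(gaps, threshold):
    """Keep only resource states whose gap exceeds the threshold."""
    return [g for g in gaps if g > threshold]

random.seed(0)
# Toy ensemble of decoded resource states (larger gap = more confident).
gaps = [random.expovariate(0.25) for _ in range(100_000)]

baseline = sum(map(logical_error_prob, gaps)) / len(gaps)
kept = postselect(gaps, threshold=2.0)
selected = sum(map(logical_error_prob, kept)) / len(kept)
accept_rate = len(kept) / len(gaps)

print(f"accept rate: {accept_rate:.2f}")
print(f"mean logical error rate: {baseline:.3f} -> {selected:.3f}")
```

The trade-off shown here is the one the abstract describes: discarding low-gap states costs acceptance rate but lowers the logical error rate of the surviving states; the paper's contribution is a *partial* gap computed before the state is consumed, which this sketch does not model.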
Paper 2
Lottery BP: Unlocking Quantum Error Decoding at Scale
Yanzhang Zhu, Chen-Yu Peng, Yun Hao Chen, Yeong-Luh Ueng, Di Wu
- Year: 2026
- Journal: arXiv preprint
- DOI: arXiv:2605.00038
- arXiv: 2605.00038
Enabling fault tolerance on millions of qubits in real time requires scalable decoding, which motivates this paper. Existing decoding algorithms (decoders), such as clustering, matching, belief propagation (BP), and neural networks, suffer from one or more of inaccuracy, high cost, and incompatibility across a broad set of quantum error correction codes, such as the surface code, toric code, and bivariate bicycle code. There is therefore a gap between existing decoders and an ideal decoder that is simultaneously accurate, fast, general, and scalable. This paper contributes in three aspects: a decoder, a decoder architecture, and a decoding simulator. First, we propose Lottery BP, a decoder that introduces randomness during decoding. Lottery BP improves decoding accuracy over BP by 2~8 orders of magnitude for topological codes. To efficiently decode multi-round measurement errors, we propose syndrome vote as a pre-processing step before Lottery BP, which compresses multiple rounds of syndromes into one. Syndrome vote increases the latency margin of decoding and mitigates the backlog problem. Second, we design a PolyQec architecture that implements Lottery BP as a local decoder and ordered statistics decoding (OSD) as a global decoder, and is configurable for surface/toric codes and X/Z checks. Since Lottery BP boosts local decoding accuracy, PolyQec invokes the costly global OSD decoder less frequently than BP+OSD, enhancing scalability, e.g., 3~5 orders of magnitude less often for topological codes. Third, to evaluate decoders fairly, we develop a PyTorch-based decoding simulator, Syndrilla, which modularizes the simulation pipeline and allows new decoders to be added flexibly. We formulate multiple metrics to quantify decoder performance and integrate them into Syndrilla. Running on GPUs, Syndrilla is 1~2 orders of magnitude faster than on CPUs.
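The abstract describes syndrome vote only as compressing multiple rounds of syndromes into one; a natural reading is a per-check majority vote across rounds, which suppresses isolated measurement errors before the decoder runs. This is a hedged sketch under that assumption: the `syndrome_vote` function name, the tie-breaking rule, and the toy syndrome data are illustrative, not taken from the paper.

```python
def syndrome_vote(rounds):
    """Compress several rounds of syndrome measurements into one syndrome
    by per-check majority vote (assumed interpretation; ties vote to 0)."""
    n_rounds = len(rounds)
    n_checks = len(rounds[0])
    return [
        1 if sum(r[c] for r in rounds) * 2 > n_rounds else 0
        for c in range(n_checks)
    ]

# Three noisy rounds of the same 4-check syndrome: check 1 fired in every
# round (a real defect), check 3 fired once (likely a measurement error).
rounds = [
    [0, 1, 0, 1],
    [0, 1, 0, 0],
    [0, 1, 0, 0],
]
print(syndrome_vote(rounds))  # -> [0, 1, 0, 0]
```

Collapsing d rounds into one also illustrates why the abstract says syndrome vote increases the latency margin: the downstream decoder is invoked once per compressed syndrome rather than once per round, which helps avoid the backlog problem.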