Compare Papers

Paper 1

Floquetifying stabiliser codes with distance-preserving rewrites

Benjamin Rodatz, Boldizsár Poór, Aleks Kissinger

Year
2024
Journal
arXiv preprint
DOI
10.48550/arXiv.2410.17240
arXiv
2410.17240

Stabiliser codes with large-weight measurements can be challenging to implement fault-tolerantly. To overcome this, we propose a Floquetification procedure which, given a stabiliser code, synthesises a novel Floquet code that only uses single- and two-qubit operations. Moreover, this procedure preserves the distance and number of logicals of the original code. The new Floquet code requires additional physical qubits, an overhead that is linear in the weight of the largest measurement of the original code. Our method is based on the ZX calculus, a graphical language for representing and rewriting quantum circuits. However, a problem arises when using ZX to rewrite error-correcting codes: ZX rewrites generally do not preserve code distance. To tackle this issue, we define the notion of a distance-preserving rewrite, which enables the transformation of error-correcting codes without changing their distance. These distance-preserving rewrites are used to decompose arbitrary-weight stabiliser measurements into quantum circuits with single- and two-qubit operations. As we only use distance-preserving rewrites, we are guaranteed that a single error in the resulting circuit creates at most a single error on the data qubits. These decompositions enable us to generalise the Floquetification procedure of [arXiv:2307.11136] to arbitrary stabiliser codes, provably preserving the distance and number of logicals of the original code.
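The code distance that these rewrites must preserve can be made concrete with a small illustration. The sketch below (not from the paper; all names are my own) brute-forces the distance of a tiny stabiliser code in the binary symplectic representation: the distance is the minimum weight of a Pauli operator that commutes with every stabiliser yet lies outside the stabiliser group, i.e. a nontrivial logical operator.

```python
from itertools import product

# Paulis as GF(2) symplectic pairs (x, z): X -> (1,0), Z -> (0,1), Y -> (1,1).

def symp(p, q, n):
    """Symplectic product over GF(2): 0 iff Paulis p and q commute."""
    return sum((p[0][i] & q[1][i]) ^ (p[1][i] & q[0][i]) for i in range(n)) % 2

def xor(p, q):
    """Product of two Paulis (phases ignored) is bitwise XOR."""
    return (tuple(a ^ b for a, b in zip(p[0], q[0])),
            tuple(a ^ b for a, b in zip(p[1], q[1])))

def distance(stabs, n):
    """Brute force: min weight of a Pauli that commutes with every
    stabiliser but is outside the stabiliser group (a nontrivial logical).
    Only feasible for very small codes."""
    # Build the full stabiliser group from the generators.
    group = set()
    for picks in product([0, 1], repeat=len(stabs)):
        g = (tuple([0] * n), tuple([0] * n))
        for bit, s in zip(picks, stabs):
            if bit:
                g = xor(g, s)
        group.add(g)
    # Enumerate all n-qubit Paulis and keep the lightest logical.
    best = None
    for qubits in product([(0, 0), (1, 0), (0, 1), (1, 1)], repeat=n):
        p = (tuple(a for a, _ in qubits), tuple(b for _, b in qubits))
        w = sum(a | b for a, b in zip(p[0], p[1]))  # Pauli weight
        if w == 0 or p in group:
            continue
        if all(symp(p, s, n) == 0 for s in stabs):
            if best is None or w < best:
                best = w
    return best

# [[4,2,2]] code: stabilisers XXXX and ZZZZ.
stabs = [((1, 1, 1, 1), (0, 0, 0, 0)), ((0, 0, 0, 0), (1, 1, 1, 1))]
print(distance(stabs, 4))  # -> 2
```

A rewrite of a code is distance-preserving in this sense exactly when the value computed above is unchanged; the paper's contribution is a set of ZX rewrites for which this is guaranteed structurally, without any such brute-force check.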


Paper 2

Lottery BP: Unlocking Quantum Error Decoding at Scale

Yanzhang Zhu, Chen-Yu Peng, Yun Hao Chen, Yeong-Luh Ueng, Di Wu

Year
2026
Journal
arXiv preprint
DOI
10.48550/arXiv.2605.00038
arXiv
2605.00038

Real-time fault tolerance on millions of qubits requires scalable decoding, which motivates this paper. Existing decoding algorithms (decoders), such as clustering, matching, belief propagation (BP), and neural networks, suffer from one or more of inaccuracy, high cost, and incompatibility across a broad set of quantum error correction codes, such as the surface code, toric code, and bivariate bicycle code. There is therefore a gap between existing decoders and an ideal decoder that is simultaneously accurate, fast, general, and scalable. This paper contributes in three areas: the decoder, the decoder architecture, and the decoding simulator. First, we propose Lottery BP, a decoder that introduces randomness during decoding. Lottery BP improves decoding accuracy over BP by 2~8 orders of magnitude for topological codes. To efficiently decode multi-round measurement errors, we propose syndrome vote, a pre-processing step before Lottery BP that compresses multiple rounds of syndromes into one. Syndrome vote increases the latency margin of decoding and mitigates the backlog problem. Second, we design the PolyQec architecture, which implements Lottery BP as a local decoder and ordered statistics decoding (OSD) as a global decoder, and is configurable for surface/toric codes and X/Z checks. Since Lottery BP boosts local decoding accuracy, PolyQec invokes the costly global OSD decoder less frequently than BP+OSD, enhancing scalability, e.g., 3~5 orders of magnitude less often for topological codes. Third, to evaluate decoders fairly, we develop Syndrilla, a PyTorch-based decoding simulator that modularises the simulation pipeline and allows new decoders to be added flexibly. We formulate multiple metrics to quantify decoder performance and integrate them into Syndrilla. Running on GPUs, Syndrilla is 1~2 orders of magnitude faster than running on CPUs.
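The syndrome-vote idea can be sketched with a hedged toy example. Assuming, and this is only my reading of the abstract rather than the paper's actual algorithm, that "compressing multiple rounds of syndromes into one" amounts to a per-check majority vote across measurement rounds:

```python
def syndrome_vote(rounds):
    """Collapse R rounds of syndrome bits into a single round by taking
    a per-check majority vote. Illustrative sketch only: the actual
    aggregation rule in the paper may differ."""
    n_rounds = len(rounds)
    # zip(*rounds) iterates over the R measurements of each check.
    return [int(sum(col) * 2 > n_rounds) for col in zip(*rounds)]

# Three noisy measurement rounds of the same three checks:
print(syndrome_vote([[1, 0, 1],
                     [1, 1, 0],
                     [1, 0, 0]]))  # -> [1, 0, 0]
```

Any such compression reduces the number of decoder invocations per unit time, which is one way to read the abstract's claim that syndrome vote "increases the latency margin of decoding and mitigates the backlog problem".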
