Compare Papers

Paper 1

Low-valency scalable quantum error correction with a dynamic compass code

Jun Zen, Xanda C. Kolesnikow, Campbell K. McLauchlan, Georgia M. Nixon, Thomas R. Scruby, Seok-Hyung Lee, Stephen D. Bartlett, Benjamin J. Brown, Robin Harper

Year
2026
Journal
arXiv preprint
DOI
10.48550/arXiv.2604.14299
arXiv
2604.14299

The ongoing development of hardware capable of reliably executing general quantum algorithms requires quantum error-correcting codes that are both practical to realise and able to rapidly reduce logical error rates as they are scaled up. Here we introduce the dynamic compass code, a code that can be implemented with a modest footprint on the heavy-hex lattice while also demonstrating a threshold. The dynamic compass code is obtained by choosing a novel measurement schedule for the syndrome extraction circuit of the heavy-hex subsystem code. We numerically evaluate its performance and observe that different choices of schedule trade off protection against logical errors in the $X$ versus the $Z$ basis. We also demonstrate that this new measurement schedule provides the code with a threshold for stability experiments. Finally, we show how the dynamic compass code could be used for fault-tolerant logic by illustrating lattice surgery between code patches.
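The abstract's central point, that a code with a threshold suppresses logical errors rapidly as it is scaled, can be illustrated with the standard sub-threshold scaling ansatz $p_L \approx A\,(p/p_{\mathrm{th}})^{(d+1)/2}$ for a distance-$d$ code. The constants below are hypothetical placeholders, not values from the paper; this is a generic sketch of threshold behaviour, not the dynamic compass code's measured performance.

```python
# Illustrative sketch only: generic sub-threshold scaling of the logical
# error rate, p_L ~ A * (p / p_th)^((d+1)/2). A and p_th are made-up
# constants, not figures from the dynamic compass code paper.
def logical_error_rate(p, d, p_th=0.01, A=0.1):
    """Phenomenological logical error rate at physical error rate p, distance d."""
    return A * (p / p_th) ** ((d + 1) / 2)

# Below threshold (p < p_th), increasing the distance suppresses p_L
# exponentially; above threshold, scaling up makes things worse.
for d in (3, 5, 7):
    print(f"d={d}: p_L ≈ {logical_error_rate(0.001, d):.1e}")
```

Each distance increment of 2 buys another factor of $p/p_{\mathrm{th}}$ in suppression, which is why demonstrating a threshold is the key property claimed for the dynamic compass code.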


Paper 2

Toward Uncertainty-Aware and Generalizable Neural Decoding for Quantum LDPC Codes

Xiangjun Mi, Frank Mueller

Year
2025
Journal
arXiv preprint
DOI
10.48550/arXiv.2510.06257
arXiv
2510.06257

Quantum error correction (QEC) is essential for scalable quantum computing, yet conventional decoding algorithms offer limited accuracy (i.e., limited suppression of logical errors) and high overheads, both of which can be alleviated by inference-based decoders. To date, such machine-learning (ML) decoders lack two key properties crucial for practical fault tolerance: reliable uncertainty quantification and robust generalization to previously unseen codes. To address this gap, we propose QuBA, a Bayesian graph neural decoder that integrates both dot-product and multi-head attention, enabling expressive error-pattern recognition alongside calibrated uncertainty estimates. Building on QuBA, we further develop SAGU (Sequential Aggregate Generalization under Uncertainty), a multi-code training framework with enhanced cross-domain robustness that enables decoding beyond the training set. Experiments on bivariate bicycle (BB) codes and their coprime variants demonstrate that (i) both QuBA and SAGU consistently outperform the classical baseline belief propagation (BP), achieving a reduction of, on average, one order of magnitude in logical error rate (LER), and up to two orders of magnitude under confident-decision bounds on the coprime BB code $[[154, 6, 16]]$; (ii) QuBA also surpasses state-of-the-art neural decoders, providing an advantage of roughly one order of magnitude (e.g., for the larger BB code $[[756, 16, \leq 34]]$) even when considering conservative (safe) decision bounds; and (iii) SAGU achieves decoding performance comparable to, or even better than, QuBA's domain-specific training approach.
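The dot-product attention that QuBA integrates is, at its core, the standard scaled dot-product mechanism $\mathrm{softmax}(QK^\top/\sqrt{d_k})\,V$. The NumPy sketch below shows that mechanism in isolation; the feature shapes and random inputs are invented for illustration and do not correspond to QuBA's actual architecture or message-passing scheme.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V

# Hypothetical shapes: 4 node-feature vectors of width 8, as might arise
# for check nodes in a Tanner-graph neural decoder.
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Multi-head attention simply runs several such maps in parallel on learned projections of the inputs and concatenates the results; a decoder can use the attention weights to pool information across checks that share support on the same error pattern.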
