Compare Papers
Paper 1
A Concatenated Dual Displacement Code for Continuous-Variable Quantum Error Correction
Fucheng Guo, Frank Mueller, Yuan Liu
- Year: 2025
- Journal: arXiv preprint
- arXiv: 2512.00481
The continuous-variable (CV) Gaussian no-go theorem fundamentally limits the suppression of Gaussian displacement errors using only Gaussian gates and states. Prior studies have employed Gottesman-Kitaev-Preskill (GKP) states as ancillary qumodes to suppress small Gaussian displacement errors, but when the displacement magnitude becomes large, lattice-crossing events arise beyond the correctable range of the GKP state. To address this issue, we concatenate a Gaussian-noise-suppression circuit with an outer analog Steane code that corrects such occasional lattice-crossing events as well as other abrupt displacement errors. Unlike conventional concatenation, which primarily aims to reduce logical error rates, the Steane-GKP duality in encoding provides complementary protection against both large and small displacement errors, enabling CV error correction within the continuous encoding space and contrasting with earlier approaches that concatenate GKP states with repetition codes for discrete qubit or qudit encodings. Analytical results show that, under infinite squeezing, the concatenated code suppresses the variance of Gaussian displacement errors across all qumodes by up to 50 percent while enabling unbiased correction of lattice-crossing events, with a success probability determined by the ratio between the residual Gaussian error standard deviation and the lattice-crossing magnitude. Even with finite squeezing, the proposed architecture continues to provide Gaussian-error suppression together with lattice-crossing correction, and the presence of the outer analog Steane code relaxes the squeezing requirement of the inner GKP states, indicating near-term experimental feasibility. This work establishes a viable route toward fault-tolerant continuous-variable quantum computation and provides new insight into the design of concatenated CV error-correcting architectures.
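The abstract's central mechanism — GKP ancillas correcting small Gaussian displacements while large displacements cross the lattice and escape the correctable range — can be illustrated numerically. The sketch below is a toy Monte Carlo of ideal (infinitely squeezed) single-mode GKP correction, not the authors' concatenated dual-displacement code: a displacement error is rounded to the nearest lattice point with spacing √π, so displacements with magnitude below √π/2 are fully removed, while larger ones leave a residual lattice shift (a "lattice-crossing" event). The names `gkp_correct` and `sigma` are illustrative choices, not from the paper.

```python
import numpy as np

def gkp_correct(u, spacing=np.sqrt(np.pi)):
    """Ideal GKP-style correction of a displacement error u.

    The centered residual r = u mod spacing is subtracted; what survives
    is the nearest lattice point. A nonzero surviving lattice shift means
    the error crossed the lattice, i.e., an uncorrectable (logical) event.
    """
    r = np.mod(u + spacing / 2, spacing) - spacing / 2
    residual_shift = u - r                       # nearest lattice point
    crossed = np.round(u / spacing) != 0         # lattice-crossing flag
    return residual_shift, crossed

# Monte Carlo: Gaussian displacement noise on one qumode.
rng = np.random.default_rng(0)
sigma = 0.4                                      # noise std (illustrative)
u = rng.normal(0.0, sigma, size=200_000)
shift, crossed = gkp_correct(u)
print(f"lattice-crossing rate at sigma={sigma}: {crossed.mean():.4f}")
```

Shrinking `sigma` (i.e., better Gaussian-noise suppression ahead of the GKP layer) drives the crossing rate down rapidly, which is the regime where an outer code handling the rare crossings becomes the dominant concern.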
Paper 2
Toward Uncertainty-Aware and Generalizable Neural Decoding for Quantum LDPC Codes
Xiangjun Mi, Frank Mueller
- Year: 2025
- Journal: arXiv preprint
- arXiv: 2510.06257
Quantum error correction (QEC) is essential for scalable quantum computing, yet decoding errors via conventional algorithms results in limited accuracy (i.e., suppression of logical errors) and high overheads, both of which can be alleviated by inference-based decoders. To date, such machine-learning (ML) decoders lack two key properties crucial for practical fault tolerance: reliable uncertainty quantification and robust generalization to previously unseen codes. To address this gap, we propose **QuBA**, a Bayesian graph neural decoder that integrates both dot-product and multi-head attention, enabling expressive error-pattern recognition alongside calibrated uncertainty estimates. Building on QuBA, we further develop **SAGU** (Sequential Aggregate Generalization under Uncertainty), a multi-code training framework with enhanced cross-domain robustness enabling decoding beyond the training set. Experiments on bivariate bicycle (BB) codes and their coprime variants demonstrate that (i) both QuBA and SAGU consistently outperform the classical baseline belief propagation (BP), achieving a reduction of on average *one order of magnitude* in logical error rate (LER), and up to *two orders of magnitude* under confident-decision bounds on the coprime BB code [[154, 6, 16]]; (ii) QuBA also surpasses state-of-the-art neural decoders, providing an advantage of roughly *one order of magnitude* (e.g., for the larger BB code [[756, 16, ≤34]]) even when considering conservative (safe) decision bounds; (iii) SAGU achieves decoding performance comparable to or even outperforming QuBA's domain-specific training approach.
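The attention ingredient named in the abstract can be sketched in a few lines. The following is a minimal numpy implementation of scaled dot-product attention with multiple heads over a set of node features (e.g., syndrome-graph nodes) — an illustration of the generic mechanism only, with random weights standing in for learned projections; it is not QuBA's trained model, and all names here are illustrative.

```python
import numpy as np

def multi_head_attention(X, num_heads, rng):
    """Scaled dot-product attention with num_heads heads (numpy sketch).

    X: (n_nodes, d) feature matrix, e.g., embeddings of syndrome nodes.
    Random Q/K/V projections stand in for learned weights.
    Returns an (n_nodes, d) matrix of attention-mixed features.
    """
    n, d = X.shape
    assert d % num_heads == 0, "feature dim must split evenly across heads"
    dh = d // num_heads
    heads = []
    for _ in range(num_heads):
        Wq, Wk, Wv = (rng.normal(0.0, d ** -0.5, (d, dh)) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(dh)           # (n, n) pairwise scores
        A = np.exp(scores - scores.max(axis=1, keepdims=True))
        A /= A.sum(axis=1, keepdims=True)        # row-wise softmax
        heads.append(A @ V)                      # mix values by attention
    return np.concatenate(heads, axis=1)         # concat heads -> (n, d)

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 8))                      # 6 nodes, 8-dim features
Y = multi_head_attention(X, num_heads=2, rng=rng)
print(Y.shape)
```

In a neural decoder, such a layer lets every check node weigh information from all other nodes when scoring candidate error patterns, rather than relying only on fixed local message passing as plain BP does.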