Compare Papers

Paper 1

Approximate maximum likelihood decoding with $K$ minimum weight matchings

Mao Lin

Year: 2025
Journal: arXiv preprint
arXiv: 2510.06531

Minimum-weight matching (MWM) and maximum-likelihood decoding (MLD) are two widely used, distinct decoding strategies for quantum error correction. For a given syndrome, the MWM decoder finds the most probable physical error, corresponding to the MWM of the decoding graph, whereas MLD aims to find the most probable logical error. Although MLD is the optimal error-correction strategy, it is typically computationally more expensive than the MWM decoder. In this work, we introduce an algorithm that approximates MLD with $K$ MWMs from the decoding graph. Taking the surface code subject to graphlike errors as an example, we show that it is possible to efficiently find the first $K$ MWMs by systematically modifying the original decoding graph and then finding the MWMs of the modified graphs. For the case where the $X$ and $Z$ errors are correlated, although the MWM of the decoding hypergraph cannot be found efficiently, we present a heuristic approach that approximates MLD by finding the $K$ MWMs in the $X$ and $Z$ subgraphs. We benchmark the efficacy of our algorithm for the surface code subject to graphlike errors, the surface-square Gottesman-Kitaev-Preskill (GKP) code, and the surface-hexagonal GKP code subject to Gaussian random displacement errors, showing that the fidelity approaches that of the exact MLD (in the first two cases) or the tensor-network decoder (in the last case) as $K$ increases.
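The step that separates this approach from plain MWM decoding is the aggregation over candidates: once $K$ candidate corrections are in hand, they are grouped by logical class and the class carrying the most total probability mass wins, rather than the single lightest correction. The sketch below illustrates that aggregation step only, under assumptions: the function name, the input format, and the premise that the $K$ candidates have already been enumerated are illustrative, and the paper's graph-modification scheme for enumerating the $K$ MWMs is not reproduced here.

```python
import numpy as np
from collections import defaultdict

def pick_logical_class(weights, logical_classes):
    """Aggregation step of approximate MLD over K candidate matchings.

    weights[i] is the total matching weight of the i-th candidate
    correction, so its probability is proportional to exp(-weights[i]);
    logical_classes[i] labels the logical action of that correction.
    Returns the logical class with the largest summed probability mass.
    """
    mass = defaultdict(float)
    w0 = min(weights)  # shift weights for numerical stability
    for w, cls in zip(weights, logical_classes):
        mass[cls] += np.exp(-(w - w0))
    return max(mass, key=mass.get)

# Hypothetical example: the lightest single candidate (weight 1.9) acts as
# logical X, but the identity class accumulates more total mass, so the
# approximate-MLD answer differs from the plain MWM answer:
print(pick_logical_class([2.0, 2.3, 1.9], ["I", "I", "X"]))  # -> "I"
```

The worked example shows why MLD can beat MWM even on the same candidate set: the most probable logical class need not contain the most probable physical error.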


Paper 2

Toward Uncertainty-Aware and Generalizable Neural Decoding for Quantum LDPC Codes

Xiangjun Mi, Frank Mueller

Year: 2025
Journal: arXiv preprint
arXiv: 2510.06257

Quantum error correction (QEC) is essential for scalable quantum computing, yet decoding errors via conventional algorithms yields limited accuracy (i.e., suppression of logical errors) and high overheads, both of which can be alleviated by inference-based decoders. To date, such machine-learning (ML) decoders lack two key properties crucial for practical fault tolerance: reliable uncertainty quantification and robust generalization to previously unseen codes. To address this gap, we propose QuBA, a Bayesian graph neural decoder that integrates both dot-product and multi-head attention, enabling expressive error-pattern recognition alongside calibrated uncertainty estimates. Building on QuBA, we further develop SAGU (Sequential Aggregate Generalization under Uncertainty), a multi-code training framework with enhanced cross-domain robustness that enables decoding beyond the training set. Experiments on bivariate bicycle (BB) codes and their coprime variants demonstrate that (i) both QuBA and SAGU consistently outperform the classical belief-propagation (BP) baseline, achieving a reduction of, on average, one order of magnitude in logical error rate (LER), and up to two orders of magnitude under confident-decision bounds on the coprime BB code $[[154, 6, 16]]$; (ii) QuBA also surpasses state-of-the-art neural decoders, providing an advantage of roughly one order of magnitude (e.g., for the larger BB code $[[756, 16, \leq 34]]$) even under conservative (safe) decision bounds; and (iii) SAGU achieves decoding performance comparable to, or even exceeding, that of QuBA's domain-specific training approach.
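As a concrete picture of what an uncertainty-aware, confident-decision workflow can look like, the sketch below wraps a generic neural decoder with Monte Carlo dropout and abstains when predictive entropy is high. This is an assumption-laden stand-in: the abstract does not specify QuBA's Bayesian mechanism, and the MC-dropout scheme, the entropy bound, and the model interface are all choices made here for illustration.

```python
import torch

@torch.no_grad()
def decode_with_confidence(model, syndrome, n_samples=32, entropy_bound=0.3):
    """Uncertainty-aware decoding via Monte Carlo dropout (an assumption;
    the paper's actual Bayesian mechanism may differ).

    Runs n_samples stochastic forward passes with dropout active, averages
    the per-qubit error probabilities, and abstains (returns None) when the
    mean predictive entropy exceeds entropy_bound, so that a fallback
    decoder can take over -- a simple confident-decision bound.
    """
    model.train()  # keep dropout layers stochastic at inference time
    probs = torch.stack(
        [torch.sigmoid(model(syndrome)) for _ in range(n_samples)]
    ).mean(dim=0)  # per-qubit error probability, averaged over samples
    eps = 1e-12    # guard against log(0)
    entropy = -(probs * (probs + eps).log()
                + (1 - probs) * (1 - probs + eps).log())
    if entropy.mean().item() > entropy_bound:
        return None  # low confidence: defer to a fallback decoder
    return (probs > 0.5).to(torch.uint8)  # hard correction estimate
```

Abstaining on high-entropy syndromes is one simple way to trade coverage for accuracy, which is the flavor of result reported under the "confident-decision bounds" above.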
