Compare Papers

Paper 1

Towards low overhead magic state distillation

Anirudh Krishna, Jean-Pierre Tillich

Year: 2018
Journal: arXiv preprint
arXiv: 1811.08461

Magic state distillation is a resource-intensive subroutine for quantum computation. The ratio of noisy input states to output states with error rate at most $\epsilon$ scales as $O(\log^\gamma(1/\epsilon))$ (Bravyi and Haah, PRA 2012). In a breakthrough paper, Hastings and Haah (PRL 2018) showed that it is possible to construct distillation routines with sub-logarithmic overhead, achieving $\gamma \approx 0.6779$ and falsifying the conjecture that $\gamma$ is lower bounded by $1$. They then ask whether $\gamma$ can be made arbitrarily close to $0$. We answer this question in the affirmative for magic state distillation routines using qudits ($d$-dimensional quantum systems).
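
To get a feel for what the exponent $\gamma$ means, here is a small numeric sketch (an illustration under assumed values, not a computation from the paper): it evaluates the overhead factor $\log^\gamma(1/\epsilon)$ at a representative target error rate and compares $\gamma = 1$ (the previously conjectured lower bound), $\gamma \approx 0.6779$ (Hastings and Haah), and a small $\gamma$ approaching the regime the qudit routines target.

```python
import math

# Illustrative only: distillation overhead scales as O(log^gamma(1/eps)); we
# compare the factor log^gamma(1/eps) for a few exponents. Constants hidden in
# the O(.) differ between protocols and are ignored here.

target_eps = 1e-15  # assumed target error rate, chosen for illustration
gammas = {
    "gamma = 1 (conjectured lower bound)": 1.0,
    "gamma ~ 0.6779 (Hastings-Haah)": 0.6779,
    "gamma = 0.1 (approaching 0)": 0.1,
}

log_term = math.log2(1.0 / target_eps)  # base of the log only shifts constants
for label, gamma in gammas.items():
    print(f"{label:38s} log^gamma(1/eps) ~ {log_term ** gamma:6.1f}")
```

At $\epsilon = 10^{-15}$ the factor drops from roughly $50$ at $\gamma = 1$ to about $14$ at $\gamma \approx 0.6779$, and tends to $1$ as $\gamma \to 0$, which is why driving $\gamma$ toward $0$ matters for the overall resource count.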


Paper 2

Toward Uncertainty-Aware and Generalizable Neural Decoding for Quantum LDPC Codes

Xiangjun Mi, Frank Mueller

Year: 2025
Journal: arXiv preprint
arXiv: 2510.06257

Quantum error correction (QEC) is essential for scalable quantum computing, yet conventional decoding algorithms offer limited accuracy (i.e., limited suppression of logical errors) and incur high overheads, both of which can be alleviated by inference-based decoders. To date, such machine-learning (ML) decoders lack two key properties crucial for practical fault tolerance: reliable uncertainty quantification and robust generalization to previously unseen codes. To address this gap, we propose QuBA, a Bayesian graph neural decoder that integrates both dot-product and multi-head attention, enabling expressive error-pattern recognition alongside calibrated uncertainty estimates. Building on QuBA, we further develop SAGU (Sequential Aggregate Generalization under Uncertainty), a multi-code training framework with enhanced cross-domain robustness that enables decoding beyond the training set. Experiments on bivariate bicycle (BB) codes and their coprime variants demonstrate that (i) both QuBA and SAGU consistently outperform the classical baseline, belief propagation (BP), achieving an average reduction of one order of magnitude in logical error rate (LER), and up to two orders of magnitude under confident-decision bounds on the coprime BB code $[[154, 6, 16]]$; (ii) QuBA also surpasses state-of-the-art neural decoders, providing an advantage of roughly one order of magnitude (e.g., for the larger BB code $[[756, 16, \leq 34]]$) even under conservative (safe) decision bounds; and (iii) SAGU achieves decoding performance comparable to, and in some cases better than, QuBA's domain-specific training approach.
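
The abstract leans on two standard attention mechanisms, scaled dot-product and multi-head attention. The following is a generic sketch of what those operations compute over a set of node features (such as the check and variable nodes of a Tanner graph); it is not the QuBA architecture, and all shapes and names here are assumptions made for illustration.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V

def multi_head_attention(X, num_heads, rng):
    """Toy multi-head attention: random per-head projections, outputs concatenated."""
    n, d = X.shape
    d_head = d // num_heads
    heads = []
    for _ in range(num_heads):
        Wq, Wk, Wv = (rng.standard_normal((d, d_head)) / np.sqrt(d) for _ in range(3))
        heads.append(scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv))
    return np.concatenate(heads, axis=-1)

# Toy input: one feature vector per node of a small (hypothetical) Tanner graph.
rng = np.random.default_rng(0)
node_features = rng.standard_normal((12, 8))
print(multi_head_attention(node_features, num_heads=2, rng=rng).shape)  # (12, 8)
```

In a graph neural decoder the attention weights would typically be restricted to the edges of the Tanner graph and the projections learned rather than random; this sketch only shows the arithmetic of the two mechanisms the abstract names.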
