Compare Papers
Paper 1
Real-space RG, error correction and Petz map
Keiichiro Furuya, Nima Lashkari, Shoy Ouseph
- Year: 2020
- Journal: arXiv preprint
- DOI: 10.48550/arXiv.2012.14001
- arXiv: 2012.14001
There are two parts to this work. First, we study the error-correction properties of the real-space renormalization group (RG). The long-distance operators are the (approximately) correctable operators encoded in the physical algebra of short-distance operators. This is closely related to modeling the holographic map as a quantum error-correcting code. As opposed to holography, the real-space RG of a many-body quantum system does not have the complementary recovery property. We discuss the role of large $N$ and a large gap in the spectrum of operators in the emergence of complementary recovery. Second, we study exact operator-algebra quantum error correction for arbitrary von Neumann algebras. We show that, as in the finite-dimensional case, for any error map between von Neumann algebras the Petz dual of the error map is a recovery map, provided the inclusion of the correctable subalgebra of operators has finite index.
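For orientation, the standard finite-dimensional Petz map that the abstract generalizes can be written explicitly (a reference sketch of the textbook form, not the paper's von Neumann algebra construction): for an error channel $\mathcal{N}$ and a full-rank reference state $\sigma$,

$$\mathcal{P}_{\sigma,\mathcal{N}}(X) \;=\; \sigma^{1/2}\,\mathcal{N}^{\dagger}\!\left(\mathcal{N}(\sigma)^{-1/2}\, X\, \mathcal{N}(\sigma)^{-1/2}\right)\sigma^{1/2},$$

where $\mathcal{N}^{\dagger}$ is the adjoint of $\mathcal{N}$ with respect to the Hilbert-Schmidt inner product. An algebra of operators is exactly correctable when $\mathcal{P}_{\sigma,\mathcal{N}} \circ \mathcal{N}$ acts as the identity on it; the paper extends this criterion to infinite dimensions under a finite-index condition on the subalgebra inclusion.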
Paper 2
Toward Uncertainty-Aware and Generalizable Neural Decoding for Quantum LDPC Codes
Xiangjun Mi, Frank Mueller
- Year: 2025
- Journal: arXiv preprint
- DOI: 10.48550/arXiv.2510.06257
- arXiv: 2510.06257
Quantum error correction (QEC) is essential for scalable quantum computing, yet decoding errors via conventional algorithms results in limited accuracy (i.e., suppression of logical errors) and high overheads, both of which can be alleviated by inference-based decoders. To date, such machine-learning (ML) decoders lack two key properties crucial for practical fault tolerance: reliable uncertainty quantification and robust generalization to previously unseen codes. To address this gap, we propose **QuBA**, a Bayesian graph neural decoder that integrates both dot-product and multi-head attention, enabling expressive error-pattern recognition alongside calibrated uncertainty estimates. Building on QuBA, we further develop **SAGU (Sequential Aggregate Generalization under Uncertainty)**, a multi-code training framework with enhanced cross-domain robustness that enables decoding beyond the training set. Experiments on bivariate bicycle (BB) codes and their coprime variants demonstrate that (i) both QuBA and SAGU consistently outperform the classical belief-propagation (BP) baseline, achieving an average reduction of *one order of magnitude* in logical error rate (LER), and up to *two orders of magnitude* under confident-decision bounds on the coprime BB code $[[154, 6, 16]]$; (ii) QuBA also surpasses state-of-the-art neural decoders, providing an advantage of roughly *one order of magnitude* (e.g., for the larger BB code $[[756, 16, \leq 34]]$) even under conservative (safe) decision bounds; and (iii) SAGU achieves decoding performance comparable to, or even exceeding, QuBA's domain-specific training approach.
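To make the two ingredients named in the abstract concrete, below is a minimal, hypothetical sketch of dot-product attention over a code's Tanner graph combined with Monte Carlo dropout as a simple proxy for Bayesian uncertainty. Everything here (the class and function names `TannerAttention` and `mc_decode`, the feature shapes, and the choice of MC dropout specifically) is an illustrative assumption, not the authors' QuBA architecture:

```python
# Illustrative sketch only: one dot-product attention step over a Tanner
# graph, with Monte Carlo dropout standing in for Bayesian uncertainty.
import torch
import torch.nn as nn

class TannerAttention(nn.Module):
    """One variable-node update: each variable node (qubit) attends over
    the check nodes it participates in, via dot-product attention."""
    def __init__(self, dim: int, p_drop: float = 0.1):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.drop = nn.Dropout(p_drop)
        self.out = nn.Linear(dim, 1)  # per-qubit error logit

    def forward(self, var_feats, chk_feats, H):
        # var_feats: (n, dim), chk_feats: (m, dim), H: (m, n) parity-check matrix
        q = self.q(var_feats)                      # (n, dim)
        k = self.k(chk_feats)                      # (m, dim)
        v = self.v(chk_feats)                      # (m, dim)
        scores = q @ k.T / q.shape[-1] ** 0.5      # (n, m) attention scores
        # mask out check nodes not connected to each variable node
        scores = scores.masked_fill(H.T == 0, float("-inf"))
        attn = self.drop(torch.softmax(scores, dim=-1))
        return self.out(var_feats + attn @ v).squeeze(-1)  # (n,) logits

def mc_decode(model, var_feats, chk_feats, H, samples: int = 20):
    """Monte Carlo dropout: keep dropout active at inference, average the
    sampled predictions, and use their spread as a rough confidence signal."""
    model.train()  # keeps dropout layers active
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(var_feats, chk_feats, H))
                             for _ in range(samples)])
    mean, std = probs.mean(0), probs.std(0)
    return (mean > 0.5).int(), std  # hard decisions + per-qubit uncertainty

# Example usage with arbitrary shapes:
# H = torch.randint(0, 2, (12, 24)).float()   # (checks, qubits)
# model = TannerAttention(dim=32)
# hard, unc = mc_decode(model, torch.randn(24, 32), torch.randn(12, 32), H)
```

The per-qubit spread returned by `mc_decode` is the kind of signal that a confident- or conservative-decision bound, as evaluated in the paper, could threshold on.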