Compare Papers

Paper 1

On the Capacity of Distributed Quantum Storage

Hua Sun, Syed A. Jafar

Year: 2025
Journal: arXiv preprint
DOI: arXiv:2510.10568
arXiv: 2510.10568

A distributed quantum storage code maps a quantum message to N storage nodes, of arbitrary specified sizes, such that the stored message is robust to an arbitrary specified set of erasure patterns. Neither the sizes of the storage nodes nor the erasure patterns need be homogeneous. The capacity of distributed quantum storage is the maximum feasible size of the quantum message (relative to the sizes of the storage nodes), when the message and all storage nodes may be scaled in size by a common factor. Representing the decoding sets as hyperedges in a storage graph, the capacity is characterized for various graphs, including MDS, wheel, Fano, and intersection graphs. The achievability is related via quantum CSS codes to a classical secure storage problem. Remarkably, our coding schemes utilize non-trivial alignment structures to ensure recovery and security in the corresponding classical secure storage problem, which leads to similarly non-trivial quantum codes. The converse is based on quantum information inequalities, e.g., strong subadditivity and weak monotonicity of quantum entropy, tailored to the topology of the storage graphs.
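For reference, the two entropy inequalities the abstract names in the converse are standard properties of the von Neumann entropy S(·); a brief statement in generic notation (subsystems A, B, C are not taken from the paper):

```latex
% Standard properties of von Neumann entropy for subsystems A, B, C.
% Generic notation for illustration; not the paper's own derivation.
\begin{align}
  S(ABC) + S(B) &\le S(AB) + S(BC) && \text{(strong subadditivity)} \\
  S(AB) + S(BC) &\ge S(A) + S(C)   && \text{(weak monotonicity)}
\end{align}
```

Weak monotonicity is equivalent to strong subadditivity via purification; converse bounds of this kind are obtained by instantiating the subsystems with storage nodes chosen according to the topology of the storage graph.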


Paper 2

Toward Uncertainty-Aware and Generalizable Neural Decoding for Quantum LDPC Codes

Xiangjun Mi, Frank Mueller

Year: 2025
Journal: arXiv preprint
DOI: arXiv:2510.06257
arXiv: 2510.06257

Quantum error correction (QEC) is essential for scalable quantum computing, yet decoding errors with conventional algorithms yields limited accuracy (i.e., suppression of logical errors) and high overheads, both of which can be alleviated by inference-based decoders. To date, such machine-learning (ML) decoders lack two key properties crucial for practical fault tolerance: reliable uncertainty quantification and robust generalization to previously unseen codes. To address this gap, we propose QuBA, a Bayesian graph neural decoder that integrates both dot-product and multi-head attention, enabling expressive error-pattern recognition alongside calibrated uncertainty estimates. Building on QuBA, we further develop SAGU (Sequential Aggregate Generalization under Uncertainty), a multi-code training framework with enhanced cross-domain robustness that enables decoding beyond the training set. Experiments on bivariate bicycle (BB) codes and their coprime variants demonstrate that (i) both QuBA and SAGU consistently outperform the classical belief propagation (BP) baseline, achieving a reduction of on average one order of magnitude in logical error rate (LER), and up to two orders of magnitude under confident-decision bounds on the coprime BB code [[154, 6, 16]]; (ii) QuBA also surpasses state-of-the-art neural decoders, providing an advantage of roughly one order of magnitude (e.g., for the larger BB code [[756, 16, ≤34]]) even under conservative (safe) decision bounds; and (iii) SAGU achieves decoding performance comparable to, or even better than, QuBA's domain-specific training approach.
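The abstract names scaled dot-product and multi-head attention as QuBA's core primitive. As an illustration only, here is a minimal NumPy sketch of that generic attention computation over per-node feature vectors (e.g., embeddings of Tanner-graph nodes); the function, shapes, and random weights are hypothetical, and this is not the authors' QuBA architecture.

```python
# Minimal sketch: scaled dot-product multi-head attention over node features.
# Illustrates the generic primitive only, not the QuBA decoder itself.
import numpy as np

def multi_head_attention(X, Wq, Wk, Wv, num_heads):
    """X: (n_nodes, d_model); Wq, Wk, Wv: (d_model, d_model) projections."""
    n, d = X.shape
    dh = d // num_heads                        # per-head dimension
    Q = (X @ Wq).reshape(n, num_heads, dh)     # queries
    K = (X @ Wk).reshape(n, num_heads, dh)     # keys
    V = (X @ Wv).reshape(n, num_heads, dh)     # values
    # per-head attention scores: shape (num_heads, n, n)
    scores = np.einsum('qhd,khd->hqk', Q, K) / np.sqrt(dh)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    out = np.einsum('hqk,khd->qhd', weights, V)      # weighted sum of values
    return out.reshape(n, d)

rng = np.random.default_rng(0)
n_nodes, d_model, heads = 6, 8, 2              # toy sizes for illustration
X = rng.normal(size=(n_nodes, d_model))        # hypothetical node embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(3))
print(multi_head_attention(X, Wq, Wk, Wv, heads).shape)  # (6, 8)
```

In a graph neural decoder, a layer like this would typically be interleaved with message-passing updates along the code's Tanner graph; the sketch demonstrates only the attention computation itself.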
