Compare Papers
Paper 1
Clifford Hierarchy Stabilizer Codes: Transversal Non-Clifford Gates and Magic
Ryohei Kobayashi, Guanyu Zhu, Po-Shen Hsin
- Year: 2025
- Journal: arXiv preprint
- DOI: arXiv:2511.02900
- arXiv: 2511.02900
A fundamental problem in fault-tolerant quantum computation is the tradeoff between universality and dimensionality, exemplified by the Bravyi-König bound for $n$-dimensional topological stabilizer codes. In this work, we extend topological Pauli stabilizer codes to a broad class of $n$-dimensional Clifford hierarchy stabilizer codes. These codes correspond to the $(n+1)$D Dijkgraaf-Witten gauge theories with non-Abelian topological order. We construct transversal non-Clifford gates through automorphism symmetries represented by cup products. In 2D, we obtain the first transversal non-Clifford logical gates, including T and CS, for Clifford stabilizer codes, using the automorphism of the twisted $\mathbb{Z}_2^3$ gauge theory (equivalent to $\mathbb{D}_4$ topological order). We also combine it with the just-in-time decoder to fault-tolerantly prepare the logical T magic state in $O(d)$ rounds via code switching. In 3D, we construct a transversal logical $\sqrt{\text{T}}$ gate in a non-Clifford stabilizer code at the third level of the Clifford hierarchy, located on a tetrahedron corresponding to a twisted $\mathbb{Z}_2^4$ gauge theory. Due to the potential single-shot code-switching properties of these codes, one could achieve the 4th level of the Clifford hierarchy with an $O(d^3)$ space-time overhead, avoiding the tradeoff observed in 2D. We propose a conjecture extending the Bravyi-König bound to Clifford hierarchy stabilizer codes, with our explicit constructions surpassing the Bravyi-König bound for achieving logical gates in the $(n+1)$-th level of the Clifford hierarchy in $n$ spatial dimensions.
Paper 2
Toward Uncertainty-Aware and Generalizable Neural Decoding for Quantum LDPC Codes
Xiangjun Mi, Frank Mueller
- Year: 2025
- Journal: arXiv preprint
- DOI: arXiv:2510.06257
- arXiv: 2510.06257
Quantum error correction (QEC) is essential for scalable quantum computing, yet decoding errors via conventional algorithms results in limited accuracy (i.e., suppression of logical errors) and high overheads, both of which can be alleviated by inference-based decoders. To date, such machine-learning (ML) decoders lack two key properties crucial for practical fault tolerance: reliable uncertainty quantification and robust generalization to previously unseen codes. To address this gap, we propose \textbf{QuBA}, a Bayesian graph neural decoder that integrates both dot-product and multi-head attention, enabling expressive error-pattern recognition alongside calibrated uncertainty estimates. Building on QuBA, we further develop \textbf{SAGU} (Sequential Aggregate Generalization under Uncertainty), a multi-code training framework with enhanced cross-domain robustness that enables decoding beyond the training set. Experiments on bivariate bicycle (BB) codes and their coprime variants demonstrate that (i) both QuBA and SAGU consistently outperform the classical belief propagation (BP) baseline, achieving a reduction of on average \emph{one order of magnitude} in logical error rate (LER), and up to \emph{two orders of magnitude} under confident-decision bounds on the coprime BB code $[[154, 6, 16]]$; (ii) QuBA also surpasses state-of-the-art neural decoders, providing an advantage of roughly \emph{one order of magnitude} (e.g., for the larger BB code $[[756, 16, \leq 34]]$) even when considering conservative (safe) decision bounds; and (iii) SAGU achieves decoding performance comparable to, or even outperforming, QuBA's domain-specific training approach.