Compare Papers

Paper 1

Decoder Performance in Hybrid CV-Discrete Surface-Code Threshold Estimation Using LiDMaS+

Dennis Delali Kwesi Wayo, Chinonso Onah, Vladimir Milchakov, Leonardo Goliatt, Sven Groppe

Year
2026
Journal
arXiv preprint
DOI
10.48550/arXiv.2603.06730
arXiv
2603.06730

Threshold estimation is central to fault-tolerant quantum computing, but the reported threshold depends not only on the code and noise model but also on the decoder used to interpret syndrome data. We study this dependence for surface-code threshold estimation under both a standard Pauli noise model and a hybrid continuous-variable/discrete model motivated by GKP-style digitization. Using LiDMaS+ as a common experimental platform, we compare minimum-weight perfect matching (MWPM) and Union-Find under matched sweep grids, matched distances, and deterministic seeding, and we additionally evaluate trained neural-guided MWPM in the hybrid regime. In the Pauli baseline at distance $d=5$, MWPM consistently outperforms Union-Find, reducing the mean sampled logical error rate from $0.384$ to $0.260$ and producing a stable threshold summary with crossing median $p_c \approx 0.053$. In the hybrid fixed-distance run, Union-Find is substantially worse than MWPM (mean LER $0.1657$ versus $0.1195$), while trained neural-guided MWPM tracks MWPM closely (mean LER $0.1158$). Across hybrid multi-distance sweeps, the distance-dependent reversal in logical-error ordering remains visible, but the grid-based crossing estimator still returns the boundary value $\sigma_c = 0.05$ for all decoders. Neural-guided runs also show elevated decoder-failure diagnostics at high noise (maximum decoder-failure rate $0.1335$ at $d=7$, $\sigma = 0.60$), indicating that learned-guidance quality and decoder robustness must be reported alongside threshold curves. These results show that decoder choice and estimator design both materially affect threshold inference.
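The grid-based crossing estimator described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's LiDMaS+ implementation: the function name, the linear interpolation between bracketing grid points, and the boundary-clamping rule for curves that never cross inside the sweep grid are all assumptions. The clamping branch illustrates how such an estimator can return a boundary value (as the abstract reports for the hybrid sweeps) when no crossing lies inside the grid.

```python
import numpy as np

def crossing_estimate(p_grid, ler_by_distance):
    """Grid-based threshold crossing estimator (illustrative sketch).

    For each pair of adjacent code distances, find the grid point where
    the logical-error-rate (LER) curves cross, i.e. where the larger
    distance stops being better, then summarize with the median crossing.
    If the curves never cross inside the grid, the estimate clamps to a
    boundary value of the sweep.
    """
    distances = sorted(ler_by_distance)
    crossings = []
    for d1, d2 in zip(distances, distances[1:]):
        # diff < 0: larger distance is better (below threshold);
        # diff > 0: larger distance is worse (above threshold).
        diff = np.asarray(ler_by_distance[d2]) - np.asarray(ler_by_distance[d1])
        sign_change = np.where(np.diff(np.sign(diff)) != 0)[0]
        if sign_change.size:
            i = sign_change[0]
            # Linear interpolation between the two bracketing grid points.
            t = diff[i] / (diff[i] - diff[i + 1])
            crossings.append(p_grid[i] + t * (p_grid[i + 1] - p_grid[i]))
        else:
            # No crossing inside the sweep grid: boundary-valued estimate.
            crossings.append(p_grid[0] if diff[0] > 0 else p_grid[-1])
    return float(np.median(crossings))
```

A real pipeline would feed in Monte Carlo LER estimates per (noise, distance) cell; the sketch only shows why widening the sweep grid matters when the estimator keeps returning a boundary value.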


Paper 2

A comprehensive survey on quantum computer usage: How many qubits are employed for what purposes?

Tsubasa Ichikawa, Hideaki Hakoshima, Koji Inui, Kosuke Ito, Ryo Matsuda, Kosuke Mitarai, Koichi Miyamoto, Wataru Mizukami, Kaoru Mizuta, Toshio Mori, Yuichiro Nakano, Akimoto Nakayama, Ken N. Okada, Takanori Sugimoto, Souichi Takahira, Nayuta Takemori, Satoyuki Tsukano, Hiroshi Ueda, Ryo Watanabe, Yuichiro Yoshida, Keisuke Fujii

Year
2023
Journal
arXiv preprint
DOI
10.48550/arXiv.2307.16130
arXiv
2307.16130

Quantum computers (QCs), which operate according to the laws of quantum mechanics, are expected to be faster than classical computers in several computational tasks, such as prime factoring and simulation of quantum many-body systems. In the last decade, research and development of QCs have advanced rapidly. Hundreds of physical qubits are now at our disposal, and several remarkable experiments have outperformed classical computers on specific computational tasks. On the other hand, it remains unclear what the typical usages of QCs are. Here we conduct an extensive survey of papers posted in the quant-ph section of arXiv that claim in their abstracts to have used QCs. To understand the current state of QC research and development, we evaluate descriptive statistics about these papers, including the number of qubits employed, QPU vendors, application domains, and so on. Our survey shows that the annual number of publications is increasing and that the typical number of qubits employed is about six to ten, growing along with the increase in quantum volume (QV). Most of the preprints are devoted to applications such as quantum machine learning, condensed matter physics, and quantum chemistry, while quantum error correction and quantum noise mitigation use more qubits than the other topics. These findings imply that the increase in QV is fundamentally relevant, and that more experiments on quantum error correction and noise mitigation, using shallow circuits with more qubits, will take place.
