Compare Papers

Paper 1

A Fault-Tolerant Honeycomb Memory

Craig Gidney, Michael Newman, Austin Fowler, Michael Broughton

Year: 2021
Journal: Quantum
DOI: 10.22331/q-2021-12-20-605
arXiv: -

Recently, Hastings & Haah introduced a quantum memory defined on the honeycomb lattice. Remarkably, this honeycomb code assembles weight-six parity checks using only two-local measurements. The sparse connectivity and two-local measurements are desirable features for certain hardware, while the weight-six parity checks enable robust performance in the circuit model. In this work, we quantify the robustness of logical qubits preserved by the honeycomb code using a correlated minimum-weight perfect-matching decoder. Using Monte Carlo sampling, we estimate the honeycomb code's threshold in different error models and project how efficiently it can reach the "teraquop regime", where trillions of logical quantum operations can be executed reliably. We perform the same estimates for the rotated surface code, and find a threshold of 0.2%–0.3% for the honeycomb code compared to a threshold of 0.5%–0.7% for the surface code in a controlled-not circuit model. In a circuit model with native two-body measurements, the honeycomb code achieves a threshold of 1.5% < p < 2.0%, where p is the collective error rate of the two-body measurement gate, including both measurement error and correlated data depolarization processes. With such gates at a physical error rate of 10⁻³, we project that the honeycomb code can reach the teraquop regime with only 600 physical qubits.


Paper 2

Decoder Switching: Breaking the Speed-Accuracy Tradeoff in Real-Time Quantum Error Correction

Riki Toshio, Kaito Kishi, Jun Fujisaki, Hirotaka Oshima, Shintaro Sato, Keisuke Fujii

Year: 2025
Journal: arXiv preprint
DOI: 10.48550/arXiv.2510.25222
arXiv: 2510.25222

The realization of fault-tolerant quantum computers hinges on the construction of high-speed, high-accuracy, real-time decoding systems. The persistent challenge lies in the fundamental trade-off between speed and accuracy: efforts to improve the decoder's accuracy often lead to unacceptable increases in decoding time and hardware complexity, while attempts to accelerate decoding result in a significant degradation in logical error rate. To overcome this challenge, we propose a novel framework, decoder switching, which balances these competing demands by combining a faster, soft-output decoder ("weak decoder") with a slower, high-accuracy decoder ("strong decoder"). In typical rounds, the weak decoder processes error syndromes and simultaneously evaluates its own reliability via soft information. Only when encountering a decoding window with low reliability do we switch to the strong decoder to achieve more accurate decoding. Numerical simulations suggest that this framework can achieve accuracy comparable to, or even surpassing, that of the strong decoder, while maintaining an average decoding time on par with the weak decoder. We also develop an online decoding scheme tailored to our framework, named double window decoding, and elucidate the criteria for preventing an exponential slowdown of quantum computation. These findings break the long-standing speed-accuracy trade-off, paving the way for scalable real-time decoding devices.
