Compare Papers

Paper 1

Adaptive Aborting Schemes for Quantum Error Correction Decoding

Sanidhay Bhambay, Prakash Murali, Neil Walton, Thirupathaiah Vasantam

Year: 2026
Journal: arXiv preprint
arXiv: 2602.16929

Quantum error correction (QEC) is essential for realizing fault-tolerant quantum computation. Current QEC controllers execute all scheduled syndrome (parity-bit) measurement rounds before decoding, even when early syndrome data indicates that the run will result in an error. The resulting excess measurements increase the decoder's workload and system latency. To address this, we introduce an adaptive abort module that simultaneously reduces decoder overhead and suppresses logical error rates in surface codes and color codes under an existing QEC controller. The key idea is that initial syndrome information allows the controller to terminate risky shots early, before additional resources are spent. An effective scheme balances the cost of further measurement against the cost of restarting the shot, and thus increases decoder efficiency. Adaptive abort schemes dynamically adjust the number of syndrome measurement rounds per shot using real-time syndrome information. We consider three schemes: fixed-depth (FD) decoding, the standard non-adaptive approach used in current state-of-the-art QEC controllers, and two adaptive schemes, AdAbort and One-Step Lookahead (OSLA) decoding. For surface and color codes under a realistic circuit-level depolarizing noise model, AdAbort substantially outperforms both OSLA and FD, yielding higher decoder efficiency across a broad range of code distances. Numerically, as the code distance increases from 5 to 15, AdAbort's improvement in decoder efficiency grows from 5% to 35% for surface codes and from 7% to 60% for color codes. To our knowledge, these are the first adaptive abort schemes considered for QEC. Our results highlight the potential importance of abort rules for increasing efficiency as we scale to large, resource-intensive quantum architectures.
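The abstract describes the abort mechanism only at a high level. The Python fragment below is a minimal toy of that idea under stated assumptions: the cost constants, the weight-threshold criterion, and every function name are illustrative inventions for this sketch, not the paper's AdAbort or OSLA procedures.

```python
import random

# Assumed cost model (not from the paper): the values are placeholders.
ROUND_COST = 1.0    # cost of one more syndrome measurement round
RESTART_COST = 3.0  # cost of aborting and re-preparing the shot


def should_abort(history, threshold=0.25):
    """Toy abort rule: stop the shot if the fraction of parity checks
    firing so far exceeds a threshold, i.e. the early syndrome record
    already suggests the shot is likely to end in a logical error."""
    total_checks = sum(len(r) for r in history)
    fired = sum(sum(r) for r in history)
    return fired / total_checks > threshold


def run_shot(sample_round, num_rounds):
    """Execute up to num_rounds syndrome rounds, aborting early when
    the toy rule fires. Returns (syndrome_history, aborted_flag)."""
    history = []
    for _ in range(num_rounds):
        history.append(sample_round())  # one round of parity bits
        if should_abort(history):
            return history, True        # restart instead of decoding
    return history, False               # decode the full record


if __name__ == "__main__":
    random.seed(0)
    # Hypothetical noisy round: 8 parity checks, each firing w.p. 0.3.
    noisy_round = lambda: [int(random.random() < 0.3) for _ in range(8)]
    history, aborted = run_shot(noisy_round, num_rounds=10)
    print(f"rounds measured: {len(history)}, aborted early: {aborted}")
```

A one-step-lookahead variant in this spirit would, after each round, compare ROUND_COST against the expected reduction in restart cost from measuring once more, and continue only when the extra round pays for itself; the decision rules and cost model in the paper itself are more involved.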


Paper 2

Tradeoffs on the volume of fault-tolerant circuits

Anirudh Krishna, Gilles Zémor

Year: 2025
Journal: arXiv preprint
arXiv: 2510.03057

Dating back to the seminal work of von Neumann [von Neumann, Automata Studies, 1956], it is known that error-correcting codes can overcome faulty circuit components to enable robust computation. Choosing an appropriate code is non-trivial because it must balance several requirements. Increasing the rate of the code reduces the relative number of redundant bits in the fault-tolerant circuit, while increasing the distance of the code ensures robustness against faults. If rate and distance were the only concerns, we could use asymptotically optimal codes, as is done in communication settings. However, choosing a code for computation is challenging due to an additional requirement: the code must keep encoded information accessible so that computation can be performed on encoded data. This seems to conflict with having large rate and distance. We prove that this is indeed the case: a code family cannot simultaneously have constant rate, growing distance, and short-depth gadgets for performing encoded CNOT gates. As a consequence, achieving good rate and distance may necessarily entail accepting very deep circuits, an undesirable tradeoff in certain architectures and applications.
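Stated in symbols, the no-go claim in the abstract takes the following schematic form. The notation below restates "constant rate, growing distance and short-depth gadgets" in standard asymptotic terms; it is not the paper's precise theorem statement.

```latex
% Schematic restatement of the abstract's claim (notation ours):
% for a code family {C_n} with length n, dimension k(n), distance d(n),
% and encoded-CNOT gadget depth D(n), the three properties
%
%     k(n)/n = \Omega(1)   (constant rate)
%     d(n)   = \omega(1)   (growing distance)
%     D(n)   = O(1)        (short-depth CNOT gadgets)
%
% cannot all hold simultaneously:
\[
  \neg\Bigl(\, \frac{k(n)}{n} = \Omega(1)
     \;\wedge\; d(n) = \omega(1)
     \;\wedge\; D(n) = O(1) \,\Bigr).
\]
```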
