Compare Papers
Paper 1
Improved decoding algorithms for surface codes under independent bit-flip and phase-flip errors
Louay Bazzi
- Year: 2026
- Journal: arXiv preprint
- DOI: arXiv:2601.00972
- arXiv: 2601.00972
We study exact decoding for the toric code and for planar and rotated surface codes under the standard independent \(X/Z\) noise model, focusing on Separate Minimum Weight (SMW) decoding and Separate Most Likely Coset (SMLC) decoding. For the SMW decoding problem, we show that an \(O(n^{3/2}\log n)\)-time decoder is achievable for surface and toric codes, improving over the \(O(n^{3}\log n)\) worst-case time of the standard approach based on complete decoding graphs. Our approach is based on a local reduction of SMW decoding to the minimum weight perfect matching problem using Fisher gadgets, which preserves planarity for planar and rotated surface codes and genus~\(1\) for the toric code. This reduction enables the use of Lipton--Tarjan planar separator methods and implies that SMW decoding lies in \(\mathrm{NC}\). For SMLC decoding, we show that the planar surface code admits an exact decoder with \(O(n^{3/2})\) algebraic complexity and that the problem lies in \(\mathrm{NC}\), improving over the \(O(n^{2})\) algebraic complexity of Bravyi \emph{et al.} Our approach proceeds via a dual-cycle formulation of coset probabilities and an explicit reduction to planar Pfaffian evaluation using Fisher--Kasteleyn--Temperley constructions. The same complexity measures apply to SMLC decoding of the rotated surface code. For the toric code, we obtain an exact polynomial-time SMLC decoder with \(O(n^{3})\) algebraic complexity. In addition, while the SMLC formulation is motivated by connections to statistical mechanics, we provide a purely algebraic derivation of the underlying duality based on MacWilliams duality and Fourier analysis. Finally, we discuss extensions of the framework to the depolarizing noise model and identify resulting open problems.
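The abstract's central reduction sends SMW decoding to minimum weight perfect matching on the defect (syndrome) graph. As a minimal illustration of that matching step only (not the paper's Fisher-gadget reduction or its \(O(n^{3/2}\log n)\) algorithm), the sketch below brute-forces the minimum-weight pairing of syndrome defects on a \(d \times d\) torus, with toroidal Manhattan distance as the edge weight; all function names here are hypothetical.

```python
from itertools import combinations

def toric_distance(a, b, d):
    """Shortest-path length between two defects on a d x d torus
    (Manhattan distance with wraparound in both directions)."""
    dx = abs(a[0] - b[0])
    dy = abs(a[1] - b[1])
    return min(dx, d - dx) + min(dy, d - dy)

def min_weight_pairing(defects, d):
    """Brute-force minimum-weight perfect matching: recursively pair
    the first defect with each candidate partner and keep the cheapest
    total. Exponential time; fine only for a handful of defects."""
    if not defects:
        return 0, []
    first, rest = defects[0], defects[1:]
    best_weight, best_pairs = float("inf"), None
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        w, pairs = min_weight_pairing(remaining, d)
        total = toric_distance(first, partner, d) + w
        if total < best_weight:
            best_weight = total
            best_pairs = [(first, partner)] + pairs
    return best_weight, best_pairs

# Four defects on a distance-5 toric code patch: the cheapest pairing
# matches nearby defects rather than crossing the lattice.
weight, pairs = min_weight_pairing([(0, 0), (0, 2), (3, 3), (3, 1)], d=5)
```

In practice this exponential search is replaced by a polynomial-time matching algorithm (e.g. blossom-based), which is exactly where the paper's planarity-preserving reduction and separator-based speedup enter.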
Paper 2
Tradeoffs on the volume of fault-tolerant circuits
Anirudh Krishna, Gilles Zémor
- Year: 2025
- Journal: arXiv preprint
- DOI: arXiv:2510.03057
- arXiv: 2510.03057
Dating back to the seminal work of von Neumann [von Neumann, Automata Studies, 1956], it is known that error-correcting codes can overcome faulty circuit components to enable robust computation. Choosing an appropriate code is non-trivial, as it must balance several requirements. Increasing the rate of the code reduces the relative number of redundant bits used in the fault-tolerant circuit, while increasing the distance of the code ensures robustness against faults. If the rate and distance were the only concerns, we could use asymptotically optimal codes, as is done in communication settings. However, choosing a code for computation is challenging due to an additional requirement: the code needs to make the encoded information accessible, so that one can compute on encoded data. This seems to conflict with having large rate and distance. We prove that this is indeed the case, namely that a code family cannot simultaneously have constant rate, growing distance, and short-depth gadgets to perform encoded CNOT gates. As a consequence, achieving good rate and distance may necessarily entail accepting very deep circuits, an undesirable trade-off in certain architectures and applications.
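The rate/distance vocabulary above can be made concrete with a toy computation (an illustration of the parameters the tradeoff is about, not of the paper's theorem). The sketch below brute-forces the minimum distance of a small binary linear code from its parity-check matrix; the function name and the [7,4,3] Hamming-code example are standard textbook material, not taken from the paper.

```python
from itertools import product

def code_distance(H):
    """Minimum Hamming weight over all nonzero codewords of the binary
    linear code with parity-check matrix H (list of 0/1 rows).
    Exhaustive over all 2^n words, so only usable for small n."""
    n = len(H[0])
    best = n + 1
    for word in product([0, 1], repeat=n):
        if not any(word):
            continue  # skip the zero codeword
        # word is a codeword iff it satisfies every parity check mod 2
        if all(sum(h * w for h, w in zip(row, word)) % 2 == 0 for row in H):
            best = min(best, sum(word))
    return best

# [7,4,3] Hamming code: n = 7, k = 7 - rank(H) = 4, so rate 4/7,
# and the brute-force search confirms distance 3.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
```

A repetition code makes the opposite tradeoff (rate \(1/n\), distance \(n\)); the paper's result says that pushing both parameters simultaneously forces the depth of encoded-gate gadgets to grow.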