Compare Papers
Paper 1
Beam search decoder for quantum LDPC codes
Min Ye, Dave Wecker, Nicolas Delfosse
- Year: 2025
- Journal: arXiv preprint
- DOI: 10.48550/arXiv.2512.07057
- arXiv: 2512.07057
We propose a decoder for quantum low-density parity-check (LDPC) codes based on a beam search heuristic guided by belief propagation (BP). Our beam search decoder applies to all quantum LDPC codes and achieves different speed-accuracy tradeoffs by tuning its parameters, such as the beam width. We perform numerical simulations under circuit-level noise for the $[[144, 12, 12]]$ bivariate bicycle (BB) code at noise rate $p = 10^{-3}$ to estimate the logical error rate and the 99.9th percentile runtime, and we compare with the BP-OSD decoder, which has been the default quantum LDPC decoder for the past six years. A variant of our beam search decoder with a beam width of 64 achieves a $17\times$ reduction in logical error rate. With a beam width of 8, we reach the same logical error rate as BP-OSD with a $26.2\times$ reduction in the 99.9th percentile runtime. We identify the beam search decoder with a beam width of 32 as a promising candidate for trapped-ion architectures because it achieves a $5.6\times$ reduction in logical error rate with a 99.9th percentile runtime per syndrome extraction round below 1 ms at $p = 5\times 10^{-4}$. Remarkably, this is achieved in software on a single core, without any parallelization or specialized hardware (FPGA, ASIC), suggesting that one might need only three 32-core CPUs to decode a trapped-ion quantum computer with 1000 logical qubits.
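To make the decoding idea in the abstract concrete, here is a minimal, hypothetical sketch of a syndrome decoder that runs a beam search over error hypotheses ranked by per-bit flip probabilities, the kind of soft information BP would supply. The function name, the `priors` input, and the expansion rule (only flip bits touching unsatisfied checks) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def beam_search_decode(H, syndrome, priors, beam_width=8, max_flips=None):
    """Return a low-weight error consistent with `syndrome`, or None.

    H         : (m, n) binary parity-check matrix (0/1 entries)
    syndrome  : length-m binary vector of violated checks
    priors    : length-n per-bit flip probabilities (e.g. BP marginals)
    beam_width: number of partial hypotheses kept per step
    """
    m, n = H.shape
    H = np.asarray(H, dtype=np.uint8)
    priors = np.clip(np.asarray(priors, dtype=float), 1e-12, 1 - 1e-12)
    flip_cost = np.log((1.0 - priors) / priors)  # lower cost = more likely flip
    if max_flips is None:
        max_flips = n

    # Each hypothesis is (cost, set of flipped bits, residual syndrome).
    beam = [(0.0, frozenset(), np.asarray(syndrome, dtype=np.uint8))]

    for _ in range(max_flips + 1):
        # Accept the cheapest hypothesis whose residual syndrome is all zero.
        for cost, flips, residual in beam:
            if not residual.any():
                error = np.zeros(n, dtype=np.uint8)
                error[list(flips)] = 1
                return error

        # Expand every hypothesis by one extra flip on a bit that touches
        # an unsatisfied check, deduplicating identical flip sets.
        children = {}
        for cost, flips, residual in beam:
            touched = np.flatnonzero(H[np.flatnonzero(residual)].any(axis=0))
            for i in touched:
                i = int(i)
                new_flips = flips | {i}
                if i in flips or new_flips in children:
                    continue
                children[new_flips] = (cost + flip_cost[i], new_flips,
                                       residual ^ H[:, i])
        if not children:
            return None
        # Keep only the beam_width lowest-cost hypotheses.
        beam = sorted(children.values(), key=lambda hyp: hyp[0])[:beam_width]

    return None
```

In practice one would feed in the circuit-level check matrix and BP soft output for `priors`, and sweep `beam_width` (the paper reports 8, 32, and 64) to trade runtime against logical error rate. The closing CPU estimate presumably follows from the block size: each $[[144,12,12]]$ block carries 12 logical qubits, so 1000 logical qubits require roughly 84 blocks, and with one single-core decoder per block, 84 cores fit on three 32-core CPUs.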
Paper 2
Tradeoffs on the volume of fault-tolerant circuits
Anirudh Krishna, Gilles Zémor
- Year: 2025
- Journal: arXiv preprint
- DOI: 10.48550/arXiv.2510.03057
- arXiv: 2510.03057
Dating back to the seminal work of von Neumann [von Neumann, Automata Studies, 1956], it is known that error-correcting codes can overcome faulty circuit components to enable robust computation. Choosing an appropriate code is non-trivial, as it must balance several requirements. Increasing the rate of the code reduces the relative number of redundant bits used in the fault-tolerant circuit, while increasing the distance of the code ensures robustness against faults. If rate and distance were the only concerns, we could use asymptotically optimal codes, as is done in communication settings. However, choosing a code for computation is challenging because of an additional requirement: the code needs to keep the encoded information accessible so that one can compute on encoded data. This seems to conflict with having large rate and distance. We prove that this is indeed the case, namely that a code family cannot simultaneously have constant rate, growing distance, and short-depth gadgets for performing encoded CNOT gates. As a consequence, achieving good rate and distance may necessarily entail accepting very deep circuits, an undesirable tradeoff in certain architectures and applications.
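Read as a formal statement, the abstract's no-go claim can be paraphrased roughly as follows; the notation and asymptotic phrasing below are only a reading of the abstract, not the paper's precise theorem.

```latex
% Hedged paraphrase (notation ours): for a code family $\{C_n\}$ with
% parameters $[n, k_n, d_n]$ and encoded-CNOT gadgets of circuit depth $D_n$,
\[
  \frac{k_n}{n} = \Omega(1)
  \quad\text{and}\quad
  d_n \to \infty
  \quad\Longrightarrow\quad
  D_n \to \infty ,
\]
% i.e. constant rate together with growing distance rules out
% short-depth (in particular constant-depth) encoded CNOT gadgets.
```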