Compare Papers

Paper 1

Fault-tolerant modular quantum computing with surface codes using single-shot emission-based hardware

Siddhant Singh, Rikiya Kashiwagi, Kazufumi Tanji, Wojciech Roga, Daniel Bhatti, Masahiro Takeoka, David Elkouss

Year: 2026
Journal: arXiv preprint
DOI: 10.48550/arXiv.2601.07241
arXiv: 2601.07241

Fault-tolerant modular quantum computing requires stabilizer measurements across the modules in a quantum network. For this, entangled states must be distributed with high quality and at a high rate. Currently, two main types of entanglement distribution protocols exist, namely emission-based and scattering-based, each with its own advantages and drawbacks. On the one hand, scattering-based protocols with cavities or waveguides are fast but demand stringent hardware such as high-efficiency integrated circulators or strong waveguide coupling. On the other hand, emission-based platforms are experimentally feasible but so far rely on Bell-pair fusion with extensive use of slow two-qubit memory gates, limiting thresholds to $\approx 0.16\%$. Here, we consider a fully distributed surface code using emission-based entanglement schemes that generate Greenberger-Horne-Zeilinger (GHZ) states in a single shot, i.e., without the need for Bell-pair fusions. We show that our optical setup produces Bell pairs, W states, and GHZ states, enabling both memory-based and optical protocols for distilling high-fidelity GHZ states with significantly improved success rates. Furthermore, we introduce protocols that completely eliminate the need for memory-based two-qubit gates, achieving thresholds of $\approx 0.19\%$ with modest hardware enhancements, increasing to above $0.24\%$ with photon-number-resolving detectors. These results show the feasibility of emission-based architectures for scalable fault-tolerant operation.
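As context for the states named in the abstract (a minimal sketch, not code from the paper): the Bell, W, and GHZ states the optical setup is said to produce, written out as numpy state vectors, plus a toy fidelity check under depolarizing noise. The noise strength p is a hypothetical illustration, not a figure from the paper.

```python
import numpy as np

def ket(bits: str) -> np.ndarray:
    """Computational-basis state |bits> as a dense state vector."""
    v = np.zeros(2 ** len(bits))
    v[int(bits, 2)] = 1.0
    return v

# The three state families the abstract says the optical setup produces:
bell = (ket("00") + ket("11")) / np.sqrt(2)               # Bell pair
w = (ket("001") + ket("010") + ket("100")) / np.sqrt(3)   # 3-qubit W state
ghz = (ket("000") + ket("111")) / np.sqrt(2)              # 3-qubit GHZ state

# Toy check: fidelity of the GHZ state after depolarizing noise of
# strength p (hypothetical value, purely for illustration).
p = 0.05
rho_noisy = (1 - p) * np.outer(ghz, ghz) + p * np.eye(8) / 8
fidelity = float(ghz @ rho_noisy @ ghz)
print(f"GHZ fidelity at p={p}: {fidelity:.4f}")  # 0.9563
```

Distillation protocols of the kind discussed in the abstract consume several such noisy copies to output fewer GHZ states of higher fidelity.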

Paper 2

Tradeoffs on the volume of fault-tolerant circuits

Anirudh Krishna, Gilles Zémor

Year: 2025
Journal: arXiv preprint
DOI: 10.48550/arXiv.2510.03057
arXiv: 2510.03057

Dating back to the seminal work of von Neumann [von Neumann, Automata Studies, 1956], it is known that error-correcting codes can overcome faulty circuit components to enable robust computation. Choosing an appropriate code is non-trivial, as it must balance several requirements. Increasing the rate of the code reduces the relative number of redundant bits used in the fault-tolerant circuit, while increasing the distance of the code ensures robustness against faults. If rate and distance were the only concerns, we could use asymptotically optimal codes, as is done in communication settings. However, choosing a code for computation is challenging due to an additional requirement: the code must make the encoded information accessible, so that one can compute on encoded data. This seems to conflict with having large rate and distance. We prove that this is indeed the case, namely that a code family cannot simultaneously have constant rate, growing distance, and short-depth gadgets for performing encoded CNOT gates. As a consequence, achieving good rate and distance may necessarily entail accepting very deep circuits, an undesirable trade-off in certain architectures and applications.
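To make the rate/distance vocabulary concrete (an illustrative sketch, not code from the paper): for an [[n, k, d]] code, the rate is k/n. The rotated surface code has the standard parameters [[d², 1, d]], so its distance grows while its rate vanishes; the constant-rate family below is a hypothetical placeholder for the regime the paper's theorem constrains.

```python
# Rate of an [[n, k, d]] quantum code: logical qubits per physical qubit.
def rate(n: int, k: int) -> float:
    return k / n

# Rotated surface code, [[d^2, 1, d]]: distance grows but rate -> 0,
# so it evades the tradeoff at the cost of large qubit overhead.
for d in (3, 5, 7, 9):
    print(f"surface code d={d}: n={d * d:2d}, rate={rate(d * d, 1):.4f}")

# Hypothetical constant-rate family (k = n/10): by the paper's result,
# if its distance also grows, its encoded-CNOT gadgets cannot all be
# short-depth.
for n in (100, 400, 900):
    print(f"constant-rate family: n={n}, k={n // 10}, "
          f"rate={rate(n, n // 10):.2f}")
```

The tradeoff the abstract describes then reads as a choice between the first family's vanishing rate and the second family's deep gadget circuits.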
