Compare Papers

Paper 1

Macromux: scalable postselection for high-threshold fault-tolerant quantum computation

Patrick Birchall, Jacob Bridgeman, Christopher Dawson, Terry Farrelly, Yehua Liu, Naomi Nickerson, Mihir Pant, Sam Roberts, Karthik Seetharam, David Tuckett

Year
2026
Journal
arXiv preprint
DOI
10.48550/arXiv.2603.04875
arXiv
2603.04875

We introduce a new resource-efficient scheme for fault-tolerant quantum computation known as 'macroscale multiplexing' (or simply 'Macromux'), which uses scalable postselection to significantly improve the threshold of a given fault-tolerant protocol against both Pauli and erasure errors. Macromux is a hierarchical method for postselecting on constant-size space-time windows of a fault-tolerant protocol, requiring only constant additional overhead. The method can be straightforwardly implemented for any fault-tolerant protocol and in any architecture with access to routing and memory, such as linear-optical fusion-based architectures. We construct fault-tolerant protocols that, to our knowledge, have the highest thresholds in the literature; simulations of fusion-based schemes built on the surface code show a maximum possible increase in Pauli thresholds of up to a factor of $\sim6$ (from $1.0\%$ to $5.9\%$). Our schemes are highly resource-efficient and can, for example, double the loss thresholds of some photonic fusion-based protocols using as little as $3\times$ overhead.
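To make the role of postselection concrete, here is a minimal Monte Carlo sketch (in Python) of generic window-level postselection: any window in which an error is detected is discarded and redone, so the resource overhead grows as 1/P(accept) while the residual error rate among accepted windows falls. This toy model, including the names postselect_windows, n_checks, and detection_prob, is purely illustrative; it is not the Macromux construction or the paper's simulation code.

# Toy Monte Carlo model of window-level postselection (illustrative only;
# not the Macromux scheme from the paper). A window contains n_checks
# independent error locations, each failing with probability p. A failure
# is detected (and the window discarded) with probability detection_prob;
# undetected failures survive into accepted windows. The expected
# resource overhead of redoing rejected windows is 1 / P(accept).
import random

def postselect_windows(p: float, n_checks: int, n_trials: int = 100_000,
                       detection_prob: float = 0.9, seed: int = 0):
    """Estimate overhead and residual error rate for one postselection level."""
    rng = random.Random(seed)
    accepted = 0
    residual_errors = 0
    for _ in range(n_trials):
        undetected = 0
        rejected = False
        for _ in range(n_checks):
            if rng.random() < p:                    # an error occurred
                if rng.random() < detection_prob:   # ...and was caught
                    rejected = True
                    break
                undetected += 1                     # ...or slipped through
        if not rejected:
            accepted += 1
            residual_errors += undetected
    p_accept = accepted / n_trials
    overhead = 1.0 / p_accept if p_accept > 0 else float("inf")
    residual_rate = residual_errors / max(accepted * n_checks, 1)
    return overhead, residual_rate

if __name__ == "__main__":
    for p in (0.01, 0.03, 0.06):
        overhead, residual = postselect_windows(p, n_checks=10)
        print(f"p={p:.2f}: overhead ~{overhead:.2f}x, residual error rate ~{residual:.4f}")

Running the sketch shows the qualitative tradeoff the abstract describes: as the physical error rate rises, the overhead of postselection grows, but the error rate conditioned on acceptance stays well below the raw rate.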


Paper 2

Tradeoffs on the volume of fault-tolerant circuits

Anirudh Krishna, Gilles Zémor

Year
2025
Journal
arXiv preprint
DOI
10.48550/arXiv.2510.03057
arXiv
2510.03057

Dating back to the seminal work of von Neumann [von Neumann, Automata Studies, 1956], it is known that error-correcting codes can overcome faulty circuit components to enable robust computation. Choosing an appropriate code is non-trivial, as it must balance several requirements. Increasing the rate of the code reduces the relative number of redundant bits used in the fault-tolerant circuit, while increasing the distance of the code ensures robustness against faults. If the rate and distance were the only concerns, we could use asymptotically optimal codes, as is done in communication settings. However, choosing a code for computation is challenging due to an additional requirement: the code must make encoded information accessible so that computation can be performed on encoded data. This seems to conflict with having large rate and distance. We prove that this is indeed the case, namely that a code family cannot simultaneously have constant rate, growing distance, and short-depth gadgets for performing encoded CNOT gates. As a consequence, achieving good rate and distance may necessarily entail accepting very deep circuits, an undesirable trade-off in certain architectures and applications.
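Read schematically (this is a paraphrase of the abstract's qualitative claim, not the paper's precise theorem statement or constants), the no-go result has the following form for a code family with parameters $[n_i, k_i, d_i]$ and encoded-CNOT gadgets of circuit depth $\Delta_i$:

% Schematic restatement of the abstract's claim (paraphrase only):
% constant rate and growing distance rule out bounded-depth CNOT gadgets.
\[
  \frac{k_i}{n_i} = \Omega(1)
  \quad\text{and}\quad
  d_i \to \infty
  \;\Longrightarrow\;
  \Delta_i \neq O(1).
\]

In words: constant rate together with growing distance rules out bounded-depth gadgets for the encoded CNOT, so deep circuits become unavoidable.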
