Compare Papers
Paper 1
Q3DE: A fault-tolerant quantum computer architecture for multi-bit burst errors by cosmic rays
Yasunari Suzuki, Takanori Sugiyama, Tomochika Arai, Wang Liao, Koji Inoue, Teruo Tanimoto
- Year: 2024
- Journal: arXiv preprint
- DOI: 10.48550/arXiv.2501.00331
- arXiv: 2501.00331
Demonstrating small logical error rates by integrating quantum error correction (QEC) into a quantum computing architecture is the next milestone toward scalable fault-tolerant quantum computing (FTQC). Encoding logical qubits with superconducting qubits and surface codes is considered a promising candidate for FTQC architectures. In this paper, we propose an FTQC architecture, which we call Q3DE, that enhances tolerance to multi-bit burst errors (MBBEs) caused by cosmic rays, with only moderate changes and overhead. Q3DE has three core components: in-situ anomaly DEtection, dynamic code DEformation, and optimized error DEcoding. In this architecture, MBBEs are detected solely from the syndrome values already produced for error correction. The effect of an MBBE is mitigated immediately by dynamically increasing the encoding level of the affected logical qubits and re-estimating a probable recovery operation by rolling back the decoding process. We investigate the performance and overhead of the Q3DE architecture with quantum-error simulators and demonstrate that Q3DE effectively reduces the period of MBBEs by a factor of 1000 and halves the size of the affected region. Q3DE therefore significantly relaxes the requirements on qubit density and qubit chip size for realizing FTQC. Because it relies only on standard features of topological stabilizer codes, our scheme is versatile for mitigating MBBEs and, more broadly, temporal variations of error properties across a wide range of physical devices and FTQC architectures.
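The in-situ anomaly detection idea lends itself to a quick illustration. Below is a minimal sketch, not the paper's algorithm: it flags a decoding round as an MBBE candidate when the fraction of fired syndrome bits jumps far above the baseline implied by the physical error rate. The function name, the windowless thresholding, and all numbers are illustrative assumptions.

```python
# Toy sketch (illustrative, not Q3DE's actual detector): flag a round as a
# multi-bit burst error (MBBE) candidate when its syndrome density far
# exceeds the baseline expected from the physical error rate.
import numpy as np

def detect_mbbe(syndromes: np.ndarray, baseline_rate: float, factor: float = 10.0):
    """syndromes: (rounds, stabilizers) array of 0/1 detection events.
    Returns indices of rounds whose fraction of fired stabilizers exceeds
    `factor` times the baseline — a crude stand-in for detecting MBBEs
    from syndrome values alone."""
    density = syndromes.mean(axis=1)  # fraction of fired stabilizers per round
    return np.flatnonzero(density > factor * baseline_rate)

# Example: 100 rounds, 48 stabilizers, 1% baseline; inject a burst at round 60.
rng = np.random.default_rng(7)
s = (rng.random((100, 48)) < 0.01).astype(int)
s[60, :24] = 1  # a cosmic-ray-like burst fires half the stabilizers at once
print(detect_mbbe(s, baseline_rate=0.01))  # typically prints [60]
```

In Q3DE, a detection of this kind would then trigger the other two components: deforming the code to a higher encoding level and rolling back the decoder to re-estimate the recovery operation.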
Paper 2
Tradeoffs on the volume of fault-tolerant circuits
Anirudh Krishna, Gilles Zémor
- Year: 2025
- Journal: arXiv preprint
- DOI: 10.48550/arXiv.2510.03057
- arXiv: 2510.03057
Dating back to the seminal work of von Neumann [von Neumann, Automata Studies, 1956], it is known that error correcting codes can overcome faulty circuit components to enable robust computation. Choosing an appropriate code is non-trivial, as it must balance several requirements. Increasing the rate of the code reduces the relative number of redundant bits used in the fault-tolerant circuit, while increasing the distance of the code ensures robustness against faults. If rate and distance were the only concerns, we could use asymptotically optimal codes, as is done in communication settings. However, choosing a code for computation is challenging due to an additional requirement: the code must keep the encoded information accessible, so that one can compute on encoded data. This seems to conflict with having large rate and distance. We prove that this is indeed the case, namely that a code family cannot simultaneously have constant rate, growing distance, and short-depth gadgets for performing encoded CNOT gates. As a consequence, achieving good rate and distance may necessarily entail accepting very deep circuits, an undesirable trade-off in certain architectures and applications.
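Stated loosely in symbols, as a hedged paraphrase (the precise circuit model, constants, and quantitative depth bound are in the paper), the no-go reads:

```latex
% Hedged paraphrase of the stated no-go, for a code family {C_i}
% with parameters [n_i, k_i, d_i]: constant rate plus growing distance
% rules out constant-depth encoded-CNOT gadgets.
\[
\frac{k_i}{n_i} \;\ge\; R \;>\; 0
\quad\text{and}\quad
d_i \xrightarrow{\; i \to \infty \;} \infty
\;\Longrightarrow\;
\operatorname{depth}\bigl(\text{encoded-CNOT gadget for } C_i\bigr) \;=\; \omega(1).
\]
```

Read against Paper 1, this frames the design space Q3DE operates in: architectures that keep encoded data easy to act on (e.g., surface codes, with constant-depth logical gadgets but vanishing rate) sit on one side of the trade-off, while high-rate, high-distance families must pay in gadget depth.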