Compare Papers

Paper 1

Artificial Intelligence for Quantum Error Correction: A Comprehensive Review

Zihao Wang, Hao Tang

Year: 2024
Journal: arXiv preprint
DOI: 10.48550/arXiv.2412.20380
arXiv: 2412.20380

Quantum Error Correction (QEC) is the process of detecting and correcting errors in quantum systems, which are prone to decoherence and quantum noise. QEC is crucial for developing stable and highly accurate quantum computing systems; consequently, considerable research effort has gone into finding the best QEC strategies. Recently, Google's breakthrough demonstrated great potential for improving the accuracy of existing error correction methods. This survey provides a comprehensive review of advances in the use of artificial intelligence (AI) tools to enhance QEC schemes for existing Noisy Intermediate-Scale Quantum (NISQ) systems. Specifically, we focus on machine learning (ML) strategies, spanning unsupervised, supervised, semi-supervised, and reinforcement learning methods. The evidence shows that these methods have recently achieved superior efficiency and accuracy in the QEC pipeline compared with conventional approaches. Our review covers more than 150 relevant studies, offering a comprehensive overview of progress and perspectives in this field. We organize the reviewed literature according to the AI strategies employed and the improvements in error correction performance. We also discuss challenges ahead, such as data sparsity caused by limited quantum error datasets, and scalability issues as the number of quantum bits (qubits) in quantum systems continues to grow rapidly. We conclude with a summary of existing work and future research directions aimed at deeper integration of AI techniques into QEC strategies.
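To make the surveyed idea concrete, here is a minimal, hypothetical sketch (not taken from the review itself) of the simplest supervised-learning decoder: a softmax classifier trained to map the measured syndrome of the three-qubit bit-flip repetition code to the most likely error pattern. All names, the noise rate, and the training setup are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of a supervised ML decoder; not the method of
# any specific paper in the survey.
rng = np.random.default_rng(0)

# Three-qubit bit-flip repetition code. Parity checks Z1Z2 and Z2Z3
# give a 2-bit syndrome; H is the parity-check matrix over GF(2).
H = np.array([[1, 1, 0],
              [0, 1, 1]])

def sample(n, p=0.1):
    """Sample i.i.d. bit-flip errors and their syndromes."""
    errors = (rng.random((n, 3)) < p).astype(int)
    syndromes = (errors @ H.T) % 2
    labels = errors @ np.array([4, 2, 1])   # error pattern as index 0..7
    return syndromes.astype(float), labels

# Softmax (multinomial logistic) classifier trained by gradient descent:
# the smallest possible "neural" decoder, mapping syndrome -> error.
X, y = sample(20_000)
W, b = np.zeros((2, 8)), np.zeros(8)
onehot = np.eye(8)[y]
for _ in range(500):
    logits = X @ W + b
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    grad = (probs - onehot) / len(X)         # cross-entropy gradient
    W -= 1.0 * (X.T @ grad)
    b -= 1.0 * grad.sum(axis=0)

# Decoding = predict the most likely error for each observed syndrome.
Xt, yt = sample(5_000)
pred = (Xt @ W + b).argmax(axis=1)
print(f"exact-correction rate: {(pred == yt).mean():.3f}")
```

Trained with cross-entropy, the classifier's argmax converges to the most likely error for each syndrome, which for a code this small a lookup table computes exactly; ML decoders become interesting precisely when the code and noise model are too large for such tables.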


Paper 2

Tradeoffs on the volume of fault-tolerant circuits

Anirudh Krishna, Gilles Zémor

Year: 2025
Journal: arXiv preprint
DOI: 10.48550/arXiv.2510.03057
arXiv: 2510.03057

It has been known since the seminal work of von Neumann [von Neumann, Automata Studies, 1956] that error-correcting codes can overcome faulty circuit components to enable robust computation. Choosing an appropriate code is non-trivial, as it must balance several requirements. Increasing the rate of the code reduces the relative number of redundant bits used in the fault-tolerant circuit, while increasing the distance of the code ensures robustness against faults. If rate and distance were the only concerns, we could use asymptotically optimal codes, as is done in communication settings. However, choosing a code for computation is challenging due to an additional requirement: the code needs to keep the encoded information accessible, so that one can compute on encoded data. This seems to conflict with having large rate and distance. We prove that this is indeed the case, namely that a code family cannot simultaneously have constant rate, growing distance, and short-depth gadgets for performing encoded CNOT gates. As a consequence, achieving good rate and distance may necessarily entail accepting very deep circuits, an undesirable trade-off in certain architectures and applications.
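In standard coding-theoretic notation (assumed here; the abstract does not spell it out, and the paper's quantitative bound is stronger than this qualitative form), the tradeoff can be stated schematically as follows.

```latex
% Schematic form of the tradeoff; notation is a standard assumption,
% not quoted from the abstract. For a family of [[n_i, k_i, d_i]]
% codes with rate R_i = k_i / n_i and distance d_i:
\[
  R_i = \Omega(1) \ \text{ and } \ d_i = \omega(1)
  \;\Longrightarrow\;
  \operatorname{depth}\!\big(\text{encoded CNOT gadget}\big) = \omega(1).
\]
```

In words: any code family that keeps constant rate while its distance grows must pay with CNOT gadgets whose depth also grows, matching the abstract's claim that good rate and distance may force very deep circuits.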
