Compare Papers
Paper 1
Fault Injection Attacks on Machine Learning-based Quantum Computer Readout Error Correction
Anthony Etim, Jakub Szefer
- Year: 2025
- Journal: arXiv preprint
- DOI: arXiv:2512.20077
- arXiv: 2512.20077
Machine-learning (ML) classifiers are increasingly used in quantum computing systems to improve multi-qubit readout discrimination and to mitigate correlated readout errors. These classifiers are now an integral component of the control and readout stacks of today's quantum computers. This paper is the first to analyze the susceptibility of such ML classifiers to physical fault injection, which can cause quantum computers to produce incorrect readout results. The study targets a 5-qubit (thus 32-class) readout error-correction model. Using the ChipWhisperer Husky for physical voltage glitching, this work leverages an automated algorithm that scans the fault-injection parameter search space to find successful faults in every layer of the target ML model. Across repeated trials, fault susceptibility proves strongly layer-dependent: faults triggered in early layers produce higher misprediction rates, whereas faults in later layers produce lower ones. The work further characterizes the resulting readout failures at the bitstring level using Hamming-distance and per-bit flip statistics, showing that single-shot glitches can induce structured readout corruption rather than purely random noise. These results motivate treating ML-based readout and readout correction as security-critical components of quantum systems and highlight the need for lightweight, deployment-friendly fault-detection and redundancy mechanisms in quantum computer readout pipelines.
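The bitstring-level characterization mentioned above can be illustrated with a minimal sketch. This is not the paper's code: the function names, the toy 5-bit readout shots, and the fault pattern are all hypothetical, shown only to make "Hamming distance" and "per-bit flip statistics" concrete for 5-qubit (5-bit) readout bitstrings.

```python
# Illustrative sketch (not the paper's implementation): comparing expected
# vs. observed 5-bit readout bitstrings across repeated fault-injection
# trials, via Hamming distance and per-bit flip rates.

def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions where two equal-length bitstrings differ."""
    return sum(x != y for x, y in zip(a, b))

def per_bit_flip_rates(expected: list[str], observed: list[str]) -> list[float]:
    """Fraction of trials in which each bit position flipped."""
    n_bits = len(expected[0])
    flips = [0] * n_bits
    for exp, obs in zip(expected, observed):
        for i, (x, y) in enumerate(zip(exp, obs)):
            if x != y:
                flips[i] += 1
    return [f / len(expected) for f in flips]

# Hypothetical data: three shots in which a glitch flips bit 0 twice,
# i.e. structured corruption concentrated on one bit, not random noise.
expected = ["00000", "10101", "11111"]
observed = ["10000", "00101", "11111"]
print([hamming_distance(e, o) for e, o in zip(expected, observed)])  # [1, 1, 0]
print(per_bit_flip_rates(expected, observed))
```

A per-bit flip profile concentrated on a few positions (here, bit 0) is the kind of structured signature the paper contrasts with uniformly random readout noise.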
Paper 2
Computational Complexity of Learning Efficiently Generatable Pure States
Taiga Hiroka, Min-Hsiu Hsieh
- Year: 2024
- Journal: arXiv preprint
- DOI: arXiv:2410.04373
- arXiv: 2410.04373
Understanding the computational complexity of learning efficient classical programs in various learning models has been a fundamental and important question in classical computational learning theory. In this work, we study the computational complexity of quantum state learning, which can be seen as a quantum generalization of the distributional learning introduced by Kearns et al. [STOC94]. Previous works by Chung and Lin [TQC21] and by Bădescu and O'Donnell [STOC21] study the sample complexity of quantum state learning and show that polynomially many copies suffice if the unknown quantum states are promised to be efficiently generatable. However, their algorithms are inefficient, and the computational complexity of this learning problem remains unresolved. In this work, we study the computational complexity of quantum state learning when the states are promised to be efficiently generatable. We show that if the unknown quantum states are promised to be pure and efficiently generatable, then there exist a quantum polynomial-time algorithm $A$ and a language $L \in PP$ such that $A^L$ can learn their classical descriptions. We also observe a connection between the hardness of learning quantum states and quantum cryptography: we show that the existence of one-way state generators with pure state outputs is equivalent to the average-case hardness of learning pure states. Additionally, we show that the existence of EFI implies the average-case hardness of learning mixed states.