Compare Papers

Paper 1

New circuits and an open source decoder for the color code

Craig Gidney, Cody Jones

Year
2023
Journal
arXiv preprint
arXiv
2312.08813

We present two new color code circuits: one inspired by superdense coding and the other based on a middle-out strategy where the color code state appears halfway between measurements. We also present "Chromobius", an open source implementation of the Möbius color code decoder. Using Chromobius, we show our new circuits reduce the performance gap between color codes and surface codes. Under uniform depolarizing noise with a noise strength of $0.1\%$, the middle-out color code circuit achieves a teraquop footprint of 1250 qubits (vs. 650 for surface codes decoded by correlated matching). Finally, we highlight that Chromobius decodes toric color codes better when given *less* information, suggesting there's substantial room for improvement in color code decoders.


Paper 2

Communication-efficient Quantum Algorithm for Distributed Machine Learning

Hao Tang, Boning Li, Guoqing Wang, Haowei Xu, Changhao Li, Ariel Barr, Paola Cappellaro, Ju Li

Year
2022
Journal
arXiv preprint
arXiv
2209.04888

The growing demands of remote detection and the increasing amount of training data make distributed machine learning under communication constraints a critical issue. This work provides a communication-efficient quantum algorithm that tackles two traditional machine learning problems, least-squares fitting and softmax regression, in the scenario where the data set is distributed across two parties. Our quantum algorithm finds the model parameters with a communication complexity of $O(\frac{\log_2 N}{\epsilon})$, where $N$ is the number of data points and $\epsilon$ is the bound on parameter errors. Compared to classical algorithms and other quantum algorithms that produce the same output, our algorithm provides a communication advantage in how the cost scales with the data volume. The building blocks of our algorithm, quantum-accelerated estimation of distributed inner products and Hamming distances, could be further applied to various tasks in distributed machine learning to reduce communication.
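To make the claimed advantage concrete, the $O(\frac{\log_2 N}{\epsilon})$ communication cost can be tabulated against a naive classical baseline that ships all $N$ data points. This is only an illustrative scaling comparison, not the algorithm itself; the constant factors are dropped and the baseline is a hypothetical worst case.

```python
import math

def quantum_comm_cost(n_points: int, eps: float) -> float:
    """Paper's quantum protocol: O(log2(N)/eps) communicated qubits, up to constants."""
    return math.log2(n_points) / eps

def classical_comm_cost(n_points: int) -> float:
    """Naive classical baseline: transmit every data point, O(N)."""
    return float(n_points)

eps = 0.01  # illustrative error bound
for n in (10**3, 10**6, 10**9):
    q = quantum_comm_cost(n, eps)
    c = classical_comm_cost(n)
    print(f"N={n:>10}: quantum ~{q:,.0f}  classical ~{c:,.0f}")
```

The gap widens with $N$: the quantum cost grows only logarithmically in the data volume, which is the "communication advantage in the scaling" the abstract refers to.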
