Compare Papers
Paper 1
Twisted Fiber Bundle Codes over Group Algebras
Chaobin Liu
- Year
- 2026
- Journal
- arXiv preprint
- DOI
- arXiv:2604.01478
- arXiv
- 2604.01478
We introduce a twisted fiber-bundle construction of quantum CSS codes over group algebras \(R=\mathbb F_2[G]\), in which each base generator carries a generator-dependent \(R\)-linear fiber twist satisfying a flatness condition. The construction extends the untwisted lifted-product code, which is recovered when all twists are identities. We show that invertible twists yield a chain complex isomorphic to the untwisted one, so the resulting binary CSS codes have the same blocklength \(n\) and encoded dimension \(k\). In contrast, singular chain-compatible twists can lower boundary ranks and thereby increase the number of logical qubits. Examples over \(R=\mathbb F_2[D_3]\) show that the twisted fiber-bundle code can outperform the corresponding untwisted lifted-product code in \(k\) while keeping the same \(n\) and, in our examples, the same minimum distance \(d\).
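For orientation, here is a minimal sketch of how such twists typically enter a product differential; the symbols \(\partial^B\), \(\partial^F\), and \(\varphi_b\) are ours for illustration and not necessarily the paper's notation. Attaching an \(R\)-linear twist \(\varphi_b\) to each base generator \(b\), a twisted total differential might take the form

\[
\partial(b \otimes f) \;=\; (\partial^B b) \otimes \varphi_b(f) \;+\; b \otimes \partial^F f,
\]

with the flatness condition supplying exactly what is needed for \(\partial^2 = 0\). The claim that singular twists can raise \(k\) is consistent with the standard CSS dimension count \(k = n - \operatorname{rank} H_X - \operatorname{rank} H_Z\): lowering the boundary (check) ranks at fixed \(n\) increases \(k\).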
Paper 2
On Emergences of Non-Classical Statistical Characteristics in Classical Neural Networks
Hanyu Zhao, Yang Wu, Yuexian Hou
- Year
- 2026
- Journal
- arXiv preprint
- DOI
- arXiv:2603.04451
- arXiv
- 2603.04451
Inspired by measurement incompatibility and the Bell family of inequalities in quantum mechanics, we propose the Non-Classical Network (NCnet), a simple classical neural architecture that stably exhibits non-classical statistical behavior under typical, interpretable experimental setups. We find that non-classicality, measured by the $S$ statistic of the CHSH inequality, arises from gradient competition among hidden-layer neurons shared across tasks. Remarkably, even without physical links supporting explicit communication, one task head can implicitly sense the training of the other task heads via local loss oscillations, leading to non-local correlations in their training outcomes. Specifically, in the low-resource regime, $S$ increases gradually with resources and approaches its classical upper bound of 2, which implies that underfitting is alleviated as resources increase. As the model nears the critical scale required for adequate performance, $S$ may temporarily exceed 2. As resources continue to grow, $S$ asymptotically decays to, and then fluctuates around, 2. Empirically, when model capacity is insufficient, $S$ is positively correlated with generalization performance, and the regime where $S$ first approaches $2$ often corresponds to good generalization. Overall, our results suggest that non-classical statistics provide a novel perspective for understanding the internal interactions and training dynamics of deep networks.
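For reference, the CHSH statistic mentioned above, with $A_a, B_b \in \{\pm 1\}$ denoting binary observables of two parties (here, task heads) under measurement settings $a, b \in \{0, 1\}$:

$$
S \;=\; \big|\, E(A_0 B_0) + E(A_0 B_1) + E(A_1 B_0) - E(A_1 B_1) \,\big|,
$$

where $E(\cdot)$ is the average over trials. Any local hidden-variable (classical) model satisfies $S \le 2$, while quantum mechanics permits values up to $2\sqrt{2}$ (the Tsirelson bound), which is why $S > 2$ is read as non-classical. How network outputs are mapped onto the settings $a, b$ follows the paper's experimental setup, which the abstract does not specify.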