Compare Papers

Paper 1

SNG-Based Real-Time Plasma Control System: From Operator Calculus to Hardware Implementation

Durhan Yazir

Year
2026
Journal
Zenodo (open-access repository operated by CERN, the European Organization for Nuclear Research)
DOI
10.5281/zenodo.18988234
arXiv
-

Controlling a fusion plasma is like trying to balance a spinning top inside a hurricane, except the top is millions of degrees hot and the hurricane is magnetic. Instabilities can tear the plasma apart in milliseconds, ending the reaction. For decades, we've been flying blind, reacting too slowly. Now, imagine giving the tokamak a brain: a super-fast co-processor that senses what's happening and reacts in microseconds, not milliseconds. This brain doesn't just follow pre-programmed rules; it understands the plasma through four simple mathematical operators derived from Spectral Nod Theory. What are these operators? Think of them as instincts:

· One senses turbulence and dampens it before it grows.
· One watches the density and gently resets it if it climbs too high, preventing a collapse.
· One sidesteps the Coulomb barrier, the fundamental obstacle to fusion, by leveraging collective plasma effects to boost reactivity by up to 3.4×.
· One detects instabilities such as ELMs and reverses them in microseconds.

We've designed a complete hardware-software system to bring these instincts to life. On the hardware side, an FPGA-based co-processor (like a specialized graphics card) sits alongside the tokamak's existing control computer, processing sensor data in under a microsecond, fast enough to catch instabilities before they destroy the plasma. On the software side, a "digital twin" simulates the entire tokamak, allowing us to optimize the operators offline and even predict future behavior during experiments. The system is designed to plug into existing tokamaks such as China's EAST and America's DIII-D, giving them an "upgrade kit" that could dramatically improve performance. We've estimated resource requirements, validated the design against current technology (FPGA-based machine learning already runs at 4.4 microseconds on DIII-D), and laid out a phased deployment roadmap. This isn't just theory; it's a blueprint for building the world's first quantum-inspired plasma operating system.
The operators that dance at the Planck scale may soon dance through silicon, bringing us one step closer to practical fusion energy.
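The abstract describes the four operators only at the block-diagram level. As a purely illustrative sketch of how such a four-"instinct" control loop could be modeled in software (every function name, sensor key, gain, and threshold below is hypothetical, not taken from the paper; the real design targets fixed-latency FPGA pipeline stages, not Python functions):

```python
# Toy model of a four-operator plasma control step (all names/values hypothetical).
# Each "instinct" maps a small sensor-state dict to actuator commands.

def damp_turbulence(state):
    # Instinct 1: oppose growing fluctuations before they cascade.
    return {"coil_trim": -0.5 * state["fluctuation"]}

def reset_density(state, limit=1.0):
    # Instinct 2: gently pull density back under a soft limit.
    excess = max(0.0, state["density"] - limit)
    return {"gas_puff": -0.2 * excess}

def boost_reactivity(state):
    # Instinct 3: placeholder for the claimed collective-effect reactivity gain.
    return {"heating_bias": 0.1 * state["density"]}

def quench_elm(state, threshold=0.8):
    # Instinct 4: fire a fast corrective kick when an ELM precursor is detected.
    return {"rmp_kick": -1.0 if state["elm_precursor"] > threshold else 0.0}

def control_step(state):
    """One loop iteration: run all four operators and merge their commands."""
    command = {}
    for op in (damp_turbulence, reset_density, boost_reactivity, quench_elm):
        command.update(op(state))
    return command

sensors = {"fluctuation": 0.4, "density": 1.3, "elm_precursor": 0.9}
print(control_step(sensors))
```

In the proposed hardware, the equivalent of `control_step` would execute every operator in parallel within the sub-microsecond latency budget the abstract cites.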

Open paper

Paper 2

Rapid Prediction of Hot-Carrier Relaxation by Learning of Nonadiabatic Hamiltonians with Graph Neural Networks

Meng K, Lu H, Xu X, Prezhdo OV, Long R

Year
2026
Journal
Journal of Chemical Theory and Computation
DOI
10.1021/acs.jctc.5c02178
arXiv
-

An electron-vibrational Hamiltonian fully encodes the corresponding quantum dynamics; however, extracting the dynamics still relies on time- and memory-consuming trajectory-based nonadiabatic molecular dynamics (NAMD) simulations, typically stochastic surface hopping. Here, we develop a general graph neural network, artificial intelligence ab initio NAMD (AINAMD), that establishes an end-to-end mapping from Hamiltonian to hot-carrier relaxation dynamics. We validated the generality of AINAMD across multiple materials, including a zero-dimensional Si quantum dot (QD), a one-dimensional carbon nanotube (CNT), a two-dimensional twisted MoS₂/WS₂ bilayer, and a three-dimensional soft-lattice MAPbI₃ perovskite. With only 10% of the data for training, AINAMD can rapidly and accurately generate picosecond energy decay curves for hot-electron and hot-hole relaxation for the remaining 90% of the Hamiltonians, while delivering a computational speed-up of more than 6 orders of magnitude compared to standard CPU-based NAMD simulations. Moreover, AINAMD can also map the Hamiltonian directly to the carrier relaxation time, bypassing generation of the energy decay curves and demonstrating the ability to handle complex NAMD tasks. Further, by projecting high-dimensional Hamiltonian encoding features into a two-dimensional space with unsupervised learning, we demonstrate that AINAMD can effectively distinguish Hamiltonian types, verifying its ability to identify a particular system (QD, CNT, MoS₂/WS₂, and MAPbI₃) and a charge carrier (electron or hole). Overall, the developed AINAMD approach provides a novel computational methodology and a conceptual framework for accelerating NAMD simulations with machine learning by many orders of magnitude.
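The quantity AINAMD ultimately predicts, whether via the full decay curve or directly, is the carrier relaxation time τ of an approximately exponential energy decay E(t) ≈ E₀·exp(−t/τ). As a minimal, library-free illustration of the curve-to-lifetime step (a log-linear least-squares fit on synthetic data; the paper's model is a graph neural network, not a curve fit, and the function below is ours, not the authors'):

```python
import math

def relaxation_time(times_ps, energies_ev):
    """Fit E(t) = E0 * exp(-t/tau) by least squares on log(E); return tau in ps.

    Assumes a single-exponential decay with strictly positive energies; a toy
    stand-in for the decay-curve-to-relaxation-time mapping the GNN learns.
    """
    ys = [math.log(e) for e in energies_ev]   # linearize: log E = log E0 - t/tau
    n = len(times_ps)
    mx = sum(times_ps) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times_ps, ys))
             / sum((x - mx) ** 2 for x in times_ps))
    return -1.0 / slope                        # slope = -1/tau

# Synthetic picosecond decay curve with tau = 2 ps:
ts = [0.1 * i for i in range(50)]
es = [1.5 * math.exp(-t / 2.0) for t in ts]
print(round(relaxation_time(ts, es), 3))  # → 2.0
```

Real hot-carrier decays are often multi-exponential, which is one reason a learned end-to-end mapping can outperform a fixed functional fit.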

Open paper