Every 40 seconds, someone in the United States has a stroke. Some survivors lose nearly all voluntary movement. Brain-computer interfaces exist to give those people their voice back — but the decoding layer keeps failing.
Scalp EEG records the summed activity of millions of neurons, attenuated and smeared by bone and tissue. The resulting signal-to-noise ratio is fundamentally poor: the signal is noisy, non-stationary, and highly variable across recording sessions and individuals.
No two brains produce the same EEG signature. Classical models trained on one person fail on another. Fixed feature spaces cannot adapt to the biological diversity of human neural architecture.
Current models treat each EEG channel independently. But the brain works like an orchestra — 64 channels interfering constructively and destructively. Classical kernels hear one violin. We needed to hear all 64.
Classical SVMs achieve ~64.6% accuracy on motor imagery classification — insufficient for a patient whose wheelchair depends on every correct command. When a BCI fails, a paralyzed patient cannot move. 91.3% recall isn't a metric. It's a lifeline.
No quantum computer required. We applied interference-based probability mathematics from quantum physics as a kernel function — measuring how EEG signals interact, not just how far apart they are.
PhysioNet EEG Motor Movement/Imagery Dataset. 109 subjects, 654 recordings segmented into approximately 9,500 trials. Imagined left vs. right hand movement. 64 channels at 160 Hz. Fully de-identified public data.
PhysioNet · 109 subjects · ~9,500 trials

Band-pass filtering at 8–30 Hz to isolate mu and beta motor rhythms. ICA artifact removal. 5-fold stratified cross-validation applied uniformly. Identical preprocessing pipeline across all 9 benchmark models.
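The band-pass step can be sketched as follows. This is a minimal illustration using a zero-phase Butterworth filter from SciPy rather than the project's full MNE-Python pipeline; the sampling rate (160 Hz) and band edges (8–30 Hz) come from the dataset description, while the filter order and function names are illustrative choices:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 160                 # PhysioNet sampling rate (Hz)
LOW, HIGH = 8.0, 30.0    # mu/beta motor-rhythm band (Hz)

def bandpass_mu_beta(eeg, fs=FS, low=LOW, high=HIGH, order=4):
    """Zero-phase Butterworth band-pass along the time axis (last axis)."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=-1)

# Example: one synthetic trial, 64 channels, 4 seconds at 160 Hz
rng = np.random.default_rng(0)
trial = rng.standard_normal((64, 4 * FS))
filtered = bandpass_mu_beta(trial)
print(filtered.shape)  # (64, 640) — filtering preserves the shape
```

A zero-phase (forward-backward) filter avoids introducing phase distortion, which matters when downstream features depend on band power rather than raw amplitude.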
8–30 Hz · MNE-Python · 5-Fold CV

Classical kernels ask: how far apart are two signals? Our quantum kernel asks: how much do they interfere? Welch's method extracts power spectral density (PSD) features across the μ, α, and β bands. The Born Rule maps these into a quantum-inspired similarity space that captures non-linear neural interference patterns classical kernels cannot represent.
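One plausible reading of this pipeline is sketched below: Welch band-power features per channel, then a Born-rule similarity computed by amplitude-encoding each feature vector as a unit "state" and taking the squared overlap. The band edges and the encoding details are assumptions, since the exact feature map is not specified here:

```python
import numpy as np
from scipy.signal import welch

FS = 160  # PhysioNet sampling rate (Hz)

def psd_features(trial, fs=FS):
    """Mean Welch band power per channel in mu (8-13), low-beta (13-20),
    and high-beta (20-30) Hz; returns one flat feature vector per trial."""
    freqs, psd = welch(trial, fs=fs, nperseg=fs, axis=-1)
    bands = [(8, 13), (13, 20), (20, 30)]
    feats = [psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in bands]
    return np.concatenate(feats, axis=-1).ravel()

def born_kernel(x, y):
    """Quantum-inspired similarity: amplitude-encode both (non-negative)
    feature vectors as unit states, then take the Born-rule overlap
    |<psi_x|psi_y>|^2, which lies in [0, 1]."""
    ax = np.sqrt(np.abs(x)); ax /= np.linalg.norm(ax)
    ay = np.sqrt(np.abs(y)); ay /= np.linalg.norm(ay)
    return float(np.dot(ax, ay) ** 2)

rng = np.random.default_rng(1)
a = psd_features(rng.standard_normal((64, 4 * FS)))
b = psd_features(rng.standard_normal((64, 4 * FS)))
print(born_kernel(a, a))  # identical states overlap completely: 1.0
print(born_kernel(a, b))  # some value in [0, 1]
```

The key contrast with a distance-based (e.g. RBF) kernel is that this similarity is an inner product of normalized states, so it measures constructive overlap between whole power profiles rather than pointwise distance.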
Born Rule · Amplitude-Phase Encoding

QSVM tested against Classical SVM, CNN1D, CNN2D, LSTM, EEGNet, DeepConvNet, and additional architectures on identical data. Statistical validation via the Wilcoxon signed-rank test with Bonferroni correction.
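The statistical validation step can be sketched with SciPy. The per-fold accuracies below are hypothetical placeholders, not the project's real numbers; only the test procedure (paired Wilcoxon signed-rank per baseline, then Bonferroni correction across comparisons) reflects the text:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-fold accuracies (5-fold CV) -- NOT the project's real data.
qsvm = np.array([0.71, 0.74, 0.73, 0.76, 0.72])
baselines = {
    "classical_svm": np.array([0.63, 0.65, 0.66, 0.64, 0.67]),
    "cnn1d":         np.array([0.69, 0.73, 0.70, 0.72, 0.715]),
}

n_tests = len(baselines)  # number of paired comparisons being made
results = {}
for name, acc in baselines.items():
    stat, p = wilcoxon(qsvm, acc)            # paired, non-parametric test
    results[name] = min(p * n_tests, 1.0)    # Bonferroni: scale by #tests
    print(f"{name}: raw p={p:.4f}, corrected p={results[name]:.4f}")
```

The Wilcoxon test is a sensible choice here because per-fold accuracy differences are paired and there is no reason to assume they are normally distributed; Bonferroni then controls the family-wise error rate across the multiple baseline comparisons.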
Wilcoxon p = 0.032 · Bonferroni-Corrected

QSVM achieved 73.4% accuracy and 81.4% F1-score — outperforming Classical SVM by 8.8 percentage points while matching GPU-dependent deep learning on standard CPU hardware.
Several deep learning models scored marginally higher accuracy. Here is why that framing misses the point entirely.
Deep learning requires expensive GPU hardware. The average rural clinic, field hospital, or home caregiver does not have one. QSVM runs on a standard laptop and matches deep learning accuracy — the difference between technology in a research lab and technology that reaches a patient in rural Florida or rural Kenya.
Standard CPU · Accessible

Deep learning needs thousands of labeled samples. Getting one ALS patient to produce clean EEG data takes months of clinical work. QSVM achieves 73% accuracy with under 500 training samples and degrades far less when training data is reduced. When labeled neurological data is scarce — which it almost always is in clinical settings — quantum-inspired kernels are often the only viable path.
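To illustrate how a quantum-inspired kernel trains an SVM on small sample counts, here is a sketch using scikit-learn's precomputed-kernel interface. The data are synthetic stand-ins for band-power features (the class separation is fabricated for illustration), and the Born-rule Gram matrix follows the same amplitude-encoding assumption as above:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Synthetic stand-in for band-power features: positive values, two classes,
# with class 1 given stronger power in its first four "mu band" features.
n, d = 200, 12
X = np.abs(rng.standard_normal((n, d))) + 0.1
y = rng.integers(0, 2, n)
X[y == 1, :4] += 1.5

def born_gram(A, B):
    """Gram matrix of Born-rule overlaps |<psi_a|psi_b>|^2 between rows."""
    Pa = np.sqrt(A); Pa /= np.linalg.norm(Pa, axis=1, keepdims=True)
    Pb = np.sqrt(B); Pb /= np.linalg.norm(Pb, axis=1, keepdims=True)
    return (Pa @ Pb.T) ** 2

split = 150  # train on 150 samples, far below deep learning's data appetite
clf = SVC(kernel="precomputed").fit(born_gram(X[:split], X[:split]), y[:split])
acc = clf.score(born_gram(X[split:], X[:split]), y[split:])
print(f"held-out accuracy: {acc:.2f}")
```

With `kernel="precomputed"`, `fit` takes the train-by-train Gram matrix and `score` takes the test-by-train matrix, so any custom similarity function slots in without touching the SVM machinery. Training a kernel SVM on a few hundred samples takes well under a second on a laptop CPU.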
Data Efficient · Low-Resource

The FDA will not approve a black-box model to control a prosthetic limb. Clinicians need to understand why a model made a decision. The quantum kernel matrix is fully interpretable — you can see exactly how similar any two neural states are. Deep learning cannot offer that transparency at the required clinical resolution.
Interpretable · Regulatory

Our contribution is not beating deep learning on a benchmark. It is matching deep learning accuracy while solving the three problems that prevent deep learning from ever leaving the lab and reaching the patients who need it most.
71st Florida State Science & Engineering Fair · March 31–April 2, 2026 · Lakeland, Florida