QUANTUM-ENHANCED NEURAL NETWORKS FOR MOTOR IMAGERY BRAIN-COMPUTER INTERFACES
Advanced Brain-Computer Interface Technology
Background
Imagine being trapped in your own body. Your mind is sharp and fully aware, but you cannot speak, move, or communicate with the people you love. For paralyzed veterans and those living with ALS, that is not a thought experiment. It is daily reality.
Brain-computer interface technology can now translate neural activity directly into communication, bypassing damaged nerves and muscles entirely. We are not building another assistive device. We are building a bridge between mind and machine, one that could restore the most fundamental human ability: to communicate and connect. For veterans who sacrificed their mobility in service to this country, and for families watching ALS slowly take away a loved one's voice, these results represent real, measurable hope.
BCI 4-Class
Our quantum-inspired neural network approach to motor imagery classification was benchmarked on the well-established BCI Competition IV Dataset 2a. The method achieved 99.36% accuracy with sub-12-millisecond latency and showed no signs of overfitting. Validation on larger datasets and in real-world applications remains ongoing, but the early results suggest this approach meaningfully advances non-invasive BCI classification, with direct implications for individuals with motor disabilities and broader accessibility applications. All training data is publicly available and is handled in accordance with HIPAA guidelines.
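This report does not describe the network's internals, so the following is only a minimal sketch of one common "quantum-inspired" pattern: L2-normalizing each feature vector as if it were a quantum state, then mixing amplitudes with trainable rotation-style angles. The class name, the dimensions, and the assumption of pre-extracted EEG features (e.g., CSP or bandpower vectors from the Dataset 2a recordings) are all illustrative, not the production model.

    # Hypothetical quantum-inspired classifier head (PyTorch).
    # Architecture details are assumptions for illustration only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class QuantumInspiredHead(nn.Module):
        """Amplitude-style encoding plus rotation-style mixing.

        Borrows two ideas from quantum circuits:
        1. the input is L2-normalized like a quantum state vector, and
        2. features are mixed with trainable angles rather than an
           unconstrained linear map.
        """
        def __init__(self, in_features: int, n_classes: int):
            super().__init__()
            self.theta = nn.Parameter(torch.randn(in_features) * 0.01)
            self.classify = nn.Linear(in_features, n_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # "Amplitude encoding": scale each sample to unit norm.
            x = F.normalize(x, dim=-1)
            # Rotation-style mixing: blend each amplitude with a shifted
            # copy of the state using learned angles, a cheap classical
            # stand-in for single-qubit rotations.
            x = torch.cos(self.theta) * x + torch.sin(self.theta) * x.roll(1, dims=-1)
            return self.classify(x)

    # One forward pass over dummy features for the four motor-imagery
    # classes (left hand, right hand, feet, tongue) in Dataset 2a.
    model = QuantumInspiredHead(in_features=64, n_classes=4)
    logits = model(torch.randn(8, 64))  # batch of 8 feature vectors
    print(logits.shape)                 # torch.Size([8, 4])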
ALS / BCI 9-Class
We developed a quantum-inspired neural network for ALS patient communication using the EEGET-ALS dataset, published in 2024. The dataset contains simultaneous EEG and eye-tracking recordings from 6 ALS patients and 170 healthy individuals across 9 distinct communication scenarios. Our model achieved 95.42% accuracy in classifying all 9 communication types, including head movements, basic needs, hygiene requests, and medical communication.
For context, most traditional motor imagery systems achieve 60 to 80% accuracy, and recent deep learning approaches have reached around 90% on simpler 2-class tasks. Within this dataset, 18.1% of the data involves head movements and 32% involves combined eye-tracking activity.
The model uses a combined neural and ocular signal-processing approach in which EEG-dominant classification is augmented by eye-tracking data. This reflects a practical reality of the disease: ALS patients progressively lose motor function while typically retaining eye control. The multimodal design maintains communication capability across all stages of ALS progression, from early-stage patients with some remaining motor function to advanced-stage patients who rely almost entirely on eye movement and residual neural signals. Initial findings are promising, though further validation continues. All training data is publicly available and is handled in accordance with HIPAA guidelines.
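To make the fusion idea concrete, here is a rough late-fusion sketch in the same spirit: separate encoders for EEG and eye-tracking features feed a learned gate that weights the two streams before classification, so the network can lean on gaze as motor-related neural signals degrade. The encoder shapes, the gating mechanism, and every dimension below are assumptions for illustration; the actual architecture is not specified in this report.

    # Hypothetical multimodal fusion sketch (PyTorch); not the
    # production EEGET-ALS model, whose details are not given here.
    import torch
    import torch.nn as nn

    class MultimodalALSClassifier(nn.Module):
        """Fuses EEG features with eye-tracking (gaze) features.

        Each modality gets its own encoder; a learned gate decides how
        much to trust each stream per sample, letting the model shift
        toward gaze as EEG-expressible motor function declines.
        """
        def __init__(self, eeg_dim=128, gaze_dim=16, hidden=64, n_classes=9):
            super().__init__()
            self.eeg_enc = nn.Sequential(nn.Linear(eeg_dim, hidden), nn.ReLU())
            self.gaze_enc = nn.Sequential(nn.Linear(gaze_dim, hidden), nn.ReLU())
            # The gate emits one weight per modality from the joint embedding.
            self.gate = nn.Sequential(nn.Linear(2 * hidden, 2), nn.Softmax(dim=-1))
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, eeg: torch.Tensor, gaze: torch.Tensor) -> torch.Tensor:
            e, g = self.eeg_enc(eeg), self.gaze_enc(gaze)
            w = self.gate(torch.cat([e, g], dim=-1))  # (batch, 2)
            fused = w[:, :1] * e + w[:, 1:] * g       # convex combination
            return self.head(fused)

    # One forward pass over dummy features for the 9 communication classes.
    model = MultimodalALSClassifier()
    out = model(torch.randn(4, 128), torch.randn(4, 16))
    print(out.shape)  # torch.Size([4, 9])

A per-sample gate is one simple way to encode the clinical intuition above: the same trained model can degrade gracefully from EEG-dominant to gaze-dominant operation as the disease progresses.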