QUANTUM-ENHANCED NEURAL NETWORKS FOR MOTOR IMAGERY BRAIN-COMPUTER INTERFACES

Advanced Brain-Computer Interface Technology

Background: Imagine being trapped in your own body—your mind sharp and aware, but unable to speak, move, or communicate with the people you love. For paralyzed veterans and those battling ALS, this isn't imagination—it's daily reality. But what if we could give them their voice back? What if advanced technology could translate their thoughts directly into communication, bypassing damaged nerves and muscles entirely? Our breakthrough brain-computer interface technology is making this possible. We're not just building another assistive device—we're creating a bridge between mind and machine that could restore the fundamental human ability to communicate and connect. For our heroes who sacrificed their mobility in service to our country, and for families watching ALS slowly steal their loved one's ability to communicate, this technology represents hope. Real, measurable hope backed by groundbreaking results.

4-Class Motor Imagery BCI

Our quantum-inspired neural network for motor imagery classification achieved promising results on the well-established BCI Competition IV Dataset 2a benchmark, reaching 99.36% accuracy with sub-12-millisecond inference latency and no signs of overfitting. Much work remains to validate these results on larger datasets and in real-world applications, but we are encouraged that the quantum-inspired approach appears to meaningfully advance non-invasive brain-computer interface classification, potentially bringing more reliable and responsive BCI systems within reach for individuals with motor disabilities and for broader accessibility applications.
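The specific architecture behind these numbers is beyond the scope of this overview, but to make the idea concrete, below is a minimal PyTorch sketch of one common quantum-inspired construction: input features are encoded as rotation angles and read out through trainable phase shifts, classically simulating the expectation values of parameterized rotations. The layer sizes, class names, and the assumed band-power features over the 22 EEG channels of BCI Competition IV 2a are illustrative assumptions, not our production model.

import torch
import torch.nn as nn

class QuantumInspiredLayer(nn.Module):
    """Classically simulated variational layer: projects inputs to rotation
    angles and reads out cos(theta + phase), loosely mimicking expectation
    values of parameterized single-qubit rotations."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.proj = nn.Linear(in_features, out_features)      # angle encoding
        self.phase = nn.Parameter(torch.zeros(out_features))  # trainable phases

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cos(self.proj(x) + self.phase)

class MotorImageryNet(nn.Module):
    """Toy 4-class motor imagery classifier over per-channel band-power features."""

    def __init__(self, n_features: int = 88, n_classes: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(
            QuantumInspiredLayer(n_features, 64),
            nn.LayerNorm(64),
            QuantumInspiredLayer(64, 32),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

# Example: BCI Competition IV 2a records 22 EEG channels; assume 4 band-power
# features per channel (hypothetical preprocessing), giving 88 inputs.
model = MotorImageryNet(n_features=22 * 4, n_classes=4)
logits = model(torch.randn(8, 88))  # batch of 8 trials
print(logits.shape)                 # torch.Size([8, 4])

Because the trigonometric readout is bounded in [-1, 1], a layer like this acts as a smooth, saturating feature map, and a network this small is computationally light, which is one way low inference latency can be achieved.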

9-Class ALS Communication BCI

We developed a quantum-inspired neural network for ALS patient communication using the EEGET-ALS dataset, released in 2024. The dataset contains simultaneous EEG and eye-tracking recordings from 6 ALS patients and 170 healthy individuals across 9 communication scenarios; 18.1% of the data involves head movements and 32% combines eye-tracking activities. Our model achieved 95.42% accuracy in classifying the 9 communication types, which include head movements, basic needs, hygiene requests, medical communication, and others. For context, most traditional motor imagery systems achieve 60-80% accuracy, and recent deep learning approaches reach around 90% on simpler 2-class tasks.

The model combines neural and ocular signal processing, using EEG-dominant classification enhanced by eye tracking to reflect the reality that ALS patients progressively lose motor function while typically retaining eye control. This multimodal design aims to maximize communication capability across all stages of ALS progression, from early-stage patients who retain some motor function to advanced-stage patients who rely primarily on preserved eye movement and residual neural signals. Considerable work remains to fully validate the approach, but the initial findings are promising; a schematic sketch of the fusion design and detailed statistics follow below.
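To illustrate the EEG-dominant, eye-tracking-enhanced design described above, here is a minimal PyTorch sketch of one plausible fusion scheme: the EEG branch carries the classification, while the eye-tracking branch produces a multiplicative gate that re-weights the EEG features. The module names, feature dimensions, and gating mechanism are illustrative assumptions, not our exact production architecture.

import torch
import torch.nn as nn

class GatedMultimodalClassifier(nn.Module):
    """EEG-dominant multimodal fusion: EEG features form the primary signal
    path; an eye-tracking branch emits a per-feature gate in (0, 1) that
    re-weights them before 9-class prediction."""

    def __init__(self, eeg_dim: int, eye_dim: int, hidden: int = 64, n_classes: int = 9):
        super().__init__()
        self.eeg_encoder = nn.Sequential(nn.Linear(eeg_dim, hidden), nn.ReLU())
        self.eye_encoder = nn.Sequential(nn.Linear(eye_dim, hidden), nn.ReLU())
        self.gate = nn.Sequential(nn.Linear(hidden, hidden), nn.Sigmoid())
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, eeg: torch.Tensor, eye: torch.Tensor) -> torch.Tensor:
        h_eeg = self.eeg_encoder(eeg)         # primary (neural) features
        g = self.gate(self.eye_encoder(eye))  # ocular gate in (0, 1)
        return self.head(h_eeg * g)           # gated fusion -> 9 classes

# Example: one batch of paired EEG and eye-tracking feature vectors
# (dimensions are hypothetical placeholders for the real preprocessing).
model = GatedMultimodalClassifier(eeg_dim=128, eye_dim=12)
logits = model(torch.randn(4, 128), torch.randn(4, 12))
print(logits.argmax(dim=1))  # predicted communication class per trial

Because the eye-tracking signal only modulates, and never replaces, the EEG features, a design like this can degrade gracefully: if the ocular input becomes uninformative, the gate tends toward a roughly constant re-weighting and classification still rests on the neural signal, consistent with the goal of serving patients across all stages of ALS progression.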