NeuralAnalytics: Real-Time Brain Signal Analysis for my Bachelor's Thesis

December 01, 2025 - Neirth

After months of hard work, I’m thrilled to finally share the culmination of my Bachelor’s Thesis at the Universitat Politècnica de València. What started as a curious exploration into the intersection of neuroscience and software engineering has evolved into a fully functional brain-computer interface system capable of analyzing EEG signals in real-time using deep learning techniques.

This project, which I’ve named NeuralAnalytics, represents not just the end of my academic journey, but also the beginning of something that I believe has genuine potential to improve lives. The core idea is straightforward but ambitious: capture brain signals, process them in real-time, and translate specific neural patterns into actionable commands—like turning on a light bulb using only the power of thought.

The Vision Behind the Project

When I first started thinking about what my Bachelor’s Thesis should be, I knew I wanted something challenging—something that would push me beyond the comfortable boundaries of conventional software development. The idea of creating a system that could interpret human brain activity felt like the perfect intersection of my interests in embedded systems, deep learning, and real-time computing.

The project follows the regulatory framework of UNE-EN 62304 for medical device software, which added an extra layer of complexity but also gave me invaluable experience in developing software for critical applications. This wasn’t just about writing code that worked; it was about writing code that could be trusted.

Understanding the Technical Challenge

The fundamental challenge of analyzing EEG signals in real-time lies in the nature of the data itself. Electroencephalographic signals are notoriously noisy, with artifacts from eye movements, muscle contractions, and electrical interference constantly threatening to overwhelm the actual neural patterns we’re trying to detect. Additionally, the brain regions we’re interested in—specifically the occipital and temporal lobes—produce signals that vary significantly between individuals and even between sessions for the same person.

To tackle this, I designed a system architecture that separates concerns cleanly. The signal acquisition layer uses the BrainBit device, a consumer-grade EEG headband that captures data from four channels (T3, T4, O1, O2). This data flows through a preprocessing pipeline that normalizes and segments the signals before feeding them to the deep learning model for classification.

The classification task itself is framed around three states:

  • RED: A specific mental task indicating one command
  • GREEN: A different mental task indicating another command
  • TRASH: Everything else—noise, artifacts, or ambiguous signals

This three-class approach allows the system to be conservative, defaulting to “TRASH” when the model isn’t confident, which is crucial for a real-time control system where false positives could be problematic.
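To make that conservative behavior concrete, here is a minimal sketch of such a confidence gate (the threshold and the class ordering are illustrative, not the tuned values from the thesis):

import torch

CLASSES = ["RED", "GREEN", "TRASH"]   # illustrative ordering
THRESHOLD = 0.8                       # illustrative confidence cutoff

def classify(model, window):
    """Map one (1, 62, 4) EEG window to a command, defaulting to TRASH."""
    with torch.no_grad():
        probs = model(window)         # softmax output, shape (1, 3)
    conf, idx = probs.max(dim=1)
    if conf.item() < THRESHOLD:
        return "TRASH"                # not confident enough to act on
    return CLASSES[idx.item()]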

Deep Learning Architecture: CNN-LSTM Hybrid

After extensive experimentation with different model architectures, I settled on a hybrid approach combining Convolutional Neural Networks (CNN) for spatial feature extraction with Long Short-Term Memory (LSTM) networks for temporal pattern recognition.

The rationale behind this design is grounded in the nature of EEG data. The CNN layers excel at extracting local patterns—frequency components, amplitude variations, and cross-channel relationships—while the LSTM layers capture the temporal dynamics that distinguish one mental state from another.

import torch
import torch.nn as nn

class NeuralAnalyticsModel(nn.Module):
    def __init__(self):
        super(NeuralAnalyticsModel, self).__init__()

        # CNN Feature Extractor
        self.conv1 = nn.Conv1d(in_channels=4, out_channels=16, kernel_size=5, padding=2)
        self.bn1 = nn.BatchNorm1d(16)
        self.pool1 = nn.MaxPool1d(kernel_size=2, stride=2)

        self.conv2 = nn.Conv1d(in_channels=16, out_channels=32, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm1d(32)
        self.pool2 = nn.MaxPool1d(kernel_size=2, stride=2)

        # LSTM Temporal Encoder (bidirectional, so it outputs 2 x 32 = 64 features)
        self.lstm = nn.LSTM(input_size=32, hidden_size=32, num_layers=1,
                            batch_first=True, bidirectional=True)

        # Classifier (softmax here, so the exported graph emits probabilities directly)
        self.classifier = nn.Sequential(
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(32, 3),
            nn.Softmax(dim=1)
        )

    def forward(self, x):
        # Forward pass reconstructed from the layer definitions and the
        # (1, 62, 4) input shape used on the inference side.
        # x arrives as (batch, 62, 4); Conv1d expects (batch, channels, time)
        x = x.permute(0, 2, 1)
        x = self.pool1(torch.relu(self.bn1(self.conv1(x))))
        x = self.pool2(torch.relu(self.bn2(self.conv2(x))))
        # back to (batch, time, features) for the batch-first LSTM
        x = x.permute(0, 2, 1)
        out, _ = self.lstm(x)
        # the last timestep carries the full bidirectional context
        return self.classifier(out[:, -1, :])

The input data is processed in windows of 62 samples with 50% overlap, providing sufficient context for pattern detection while maintaining real-time responsiveness. Each window passes through z-score normalization per channel, ensuring consistent feature scales regardless of signal amplitude variations.

The Importance of Data Normalization

One aspect that significantly impacted model performance was the normalization strategy. Initially, I experimented with various approaches, but z-score normalization per window proved to be the most robust:

X_norm = (X − μ) / σ

Where μ is the mean and σ is the standard deviation of each channel within the window. This approach accounts for the natural drift in EEG baseline values and ensures that the model focuses on relative patterns rather than absolute amplitudes.
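In code, the windowing and normalization step boils down to something like the following NumPy sketch (the epsilon term is a defensive guard against a zero standard deviation, not part of the formula above):

import numpy as np

WINDOW = 62          # samples per window
STEP = WINDOW // 2   # 50% overlap

def normalized_windows(signal):
    """Slice an (n_samples, 4) EEG stream into overlapping z-scored windows."""
    for start in range(0, signal.shape[0] - WINDOW + 1, STEP):
        w = signal[start:start + WINDOW]
        mu = w.mean(axis=0)              # per-channel mean
        sigma = w.std(axis=0) + 1e-8     # per-channel std; epsilon avoids /0
        yield (w - mu) / sigma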

Rust-Based Inference Engine

For the inference side, I chose Rust as the implementation language. This decision was driven by several factors: deterministic memory management for real-time constraints, excellent performance characteristics, and the availability of the Tract library for ONNX model inference.

The model trained in PyTorch is exported to ONNX format, allowing clean separation between the training environment (Python with GPU acceleration) and the inference environment (Rust on a Raspberry Pi 4). This architecture mirrors what I explored in my previous post about creating a predictive system in Rust and PyTorch, though the complexity here is significantly higher due to real-time constraints.
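The export step itself is short in PyTorch; a minimal version looks roughly like this (the checkpoint path and tensor names are illustrative):

import torch

model = NeuralAnalyticsModel()
model.load_state_dict(torch.load("checkpoint.pt"))   # hypothetical checkpoint path
model.eval()

dummy = torch.randn(1, 62, 4)   # matches the input fact declared on the Rust side
torch.onnx.export(model, dummy, "assets/neural_analytics.onnx",
                  input_names=["eeg_window"], output_names=["class_probs"])

On the Rust side, the service then loads the exported model through Tract: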

impl Default for NeuralAnalyticsService {
    fn default() -> Self {
        // Load the ONNX graph exported from PyTorch, pin the input to a
        // single (1, 62, 4) f32 window, then optimize the graph and make
        // it runnable for repeated inference calls.
        let model = tract_onnx::onnx()
            .model_for_path("assets/neural_analytics.onnx")
            .expect("Failed to load model")
            .with_input_fact(0, InferenceFact::dt_shape(f32::datum_type(), tvec!(1, 62, 4)))
            .expect("Failed to set input shape")
            .into_optimized()
            .expect("Failed to optimize model")
            .into_runnable()
            .expect("Failed to create runnable model");

        NeuralAnalyticsService { model }
    }
}

The state machine architecture handles the continuous signal flow, managing the buffering, preprocessing, and inference pipeline while maintaining strict timing constraints. The system runs on a Raspberry Pi 4 Model B (8GB) with a real-time operating system configuration to guarantee response times.
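Stripped of the Rust machinery, the heart of that loop can be sketched in a few lines of Python (the real implementation is the Rust state machine described above, so treat this as annotated pseudocode):

import numpy as np
import torch
from collections import deque

buffer = deque(maxlen=62)   # rolling window of the most recent samples

def on_samples(samples, model):
    """Append incoming (4,) sample rows; run inference once the window fills."""
    for s in samples:
        buffer.append(s)
    if len(buffer) < buffer.maxlen:
        return None
    w = np.asarray(buffer, dtype=np.float32)            # (62, 4)
    w = (w - w.mean(axis=0)) / (w.std(axis=0) + 1e-8)   # z-score per channel
    with torch.no_grad():
        probs = model(torch.from_numpy(w).unsqueeze(0))
    # discard the oldest half of the buffer to realize the 50% overlap
    for _ in range(buffer.maxlen // 2):
        buffer.popleft()
    return probs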

Hardware Integration and Smart Home Control

One of the most satisfying aspects of this project was the tangible output: controlling a Tapo Smart Bulb using brain signals. When the model detects a valid “GREEN” pattern with sufficient confidence, it triggers a state change in the smart bulb. The feedback loop is immediate and visceral—you think, and the light responds.

The BrainFlow SDK handles the Bluetooth communication with the BrainBit device, abstracting away the low-level protocol details and providing a clean streaming interface. This allowed me to focus on the signal processing and machine learning aspects without getting bogged down in hardware-specific implementation details.
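BrainFlow also ships Python bindings, so a minimal capture snippet is easy to sketch (simplified; error handling and Bluetooth pairing details omitted):

import time
from brainflow.board_shim import BoardShim, BoardIds, BrainFlowInputParams

board = BoardShim(BoardIds.BRAINBIT_BOARD, BrainFlowInputParams())
board.prepare_session()
board.start_stream()
time.sleep(1)   # let the internal ring buffer fill

eeg = BoardShim.get_eeg_channels(BoardIds.BRAINBIT_BOARD)   # rows of the 4 EEG channels
data = board.get_current_board_data(62)                     # newest 62 samples
window = data[eeg, :].T                                     # -> (62, 4)

board.stop_stream()
board.release_session()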

Project Structure and Code Organization

The project follows a modular structure that separates concerns across different packages:

NeuralAnalytics/
├── packages/
│   ├── neural_analytics_core/     # Core Rust implementation
│   ├── neural_analytics_data/     # Data capture utilities
│   ├── neural_analytics_gui/      # GUI for signal visualization
│   └── neural_analytics_model/    # PyTorch model training
├── docs/                          # LaTeX thesis documentation
└── dataset/                       # Training data organized by class

Each package has clear responsibilities, and the boundaries between them are well-defined. This modular approach made iterative development much easier—I could refine the model training pipeline without touching the Rust inference code, and vice versa.

Challenges and Lessons Learned

The journey wasn’t without its obstacles. One particularly challenging aspect was dealing with the variability in EEG signals between recording sessions. A model that performed excellently on one day’s data could struggle the next. This led me to implement more robust augmentation strategies and to be more careful about the stratification of training and validation sets.
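Augmentations along these lines are common for EEG; this sketch is indicative rather than the exact recipe from the thesis:

import numpy as np

def augment(window, rng=np.random.default_rng()):
    """Illustrative EEG augmentations: amplitude scaling plus additive noise."""
    w = window * rng.uniform(0.9, 1.1)            # random per-window gain
    w = w + rng.normal(0.0, 0.05, size=w.shape)   # small Gaussian noise
    return w.astype(np.float32)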

Another lesson was the importance of end-to-end testing. It’s one thing to achieve high accuracy on pre-recorded datasets, but real-time performance with a live signal stream is a different beast entirely. Latency, jitter, and the psychological pressure of a live demonstration all introduced factors that weren’t present in offline evaluation.

Media Coverage and Public Recognition

I was fortunate that this project caught the attention of major Spanish media outlets. El Español published an article about the project, and I was invited to demonstrate the system live on Antena 3’s “Y Ahora Sonsoles” program.

The public interest in this project has been overwhelming and humbling. It reinforced my belief that technology, when applied thoughtfully, has the potential to genuinely improve people’s lives—particularly for those with motor disabilities who could benefit from brain-computer interfaces for communication and control.

Technical Specifications and Results

For those interested in the technical details, here’s a summary of the system specifications:

Component             Specification
--------------------  -----------------------------------------------------
EEG Device            BrainBit (4 channels: T3, T4, O1, O2)
Processing Platform   Raspberry Pi 4 Model B (8GB)
Model Architecture    CNN-LSTM hybrid (16→32 conv, 32×2 bidirectional LSTM)
Window Size           62 samples with 50% overlap
Input Normalization   Z-score per channel per window
Output Classes        3 (RED, GREEN, TRASH)
Inference Library     Tract (ONNX runtime for Rust)
Smart Device          Tapo Smart Bulb

The model achieves reliable classification performance in controlled conditions, with the system maintaining real-time responsiveness on the constrained hardware platform.

Future Directions

While this project represents the completion of my Bachelor’s Thesis, I don’t consider it finished. There are numerous avenues for improvement and extension:

  • Expanded command vocabulary: Moving beyond binary control to multiple distinct commands
  • Personalization pipelines: Real-time adaptation to individual users without extensive retraining
  • Alternative output modalities: Integration with wheelchair controls, computer cursors, or speech synthesis
  • Edge deployment optimization: Quantization and pruning for even lower latency

The field of brain-computer interfaces is evolving rapidly, and I’m excited to continue contributing to it.

Conclusion

This project has been one of the most challenging and rewarding experiences of my academic career. It pushed me to learn about domains I had never explored before—neuroscience, real-time systems, regulatory compliance for medical devices—while also deepening my expertise in areas I was already passionate about, like deep learning and Rust development.

The complete source code is available on GitHub under the GPL-3.0 license. The repository includes the training code, the Rust inference engine, documentation, and everything needed to replicate or extend this work. I hope it serves as a useful reference for anyone interested in exploring the fascinating intersection of neuroscience and software engineering.

Credits

The header image of this post was created using Midjourney AI.