EEG Biometric Authentication
Ever thought about using your brain waves as a password? That's exactly what this project explores - using EEG (electroencephalogram) signals for biometric authentication. I worked on reproducing a subset of research by Alzahab et al. that investigates whether listening to different types of audio makes brain wave patterns more distinctive, and therefore more reliable for identifying individuals.
Overview
While we typically think of biometric authentication in terms of fingerprints or facial recognition, brain signals offer some fascinating advantages. They're incredibly difficult to fake (you can't just make a copy like you could with a fingerprint), and they change based on your mental state - meaning they only work when you're alive and conscious. This project specifically looks at how playing different types of audio (like songs in your native language vs. a foreign language) affects how well we can identify someone from their brain activity.
The Brain Signals
The project uses EEG recordings, which measure tiny electrical signals from your brain using electrodes placed on your scalp. The dataset includes recordings from 20 volunteers, collected using four electrodes at specific locations:
- T7: Left temporal region (near your left ear) - involved in processing auditory information
- F8: Right frontal region (near the right side of your forehead) - involved in cognitive processing
- Cz: Central midline (top of your head) - important for motor and sensory processing
- P4: Right parietal region (back-right of your head) - involved in sensory integration
These positions are chosen according to the standard 10-10 international system for EEG electrode placement, essentially a map of standardized locations on the scalp.
The signals were recorded at 200 Hz (meaning 200 measurements per second) while participants did one of the following:
- Rested quietly with eyes open or closed
- Listened to songs in their native language
- Listened to songs in a foreign language
- Listened to neutral instrumental music
The audio was played either through normal in-ear headphones or bone-conducting headphones (which transmit sound through your skull bones).
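To make the rest of the pipeline concrete, it helps to picture the shape of the raw data. The minimal sketch below assumes the recordings are loaded as NumPy arrays; the 60-second session length is a made-up figure for illustration, not a property of the actual dataset.

```python
import numpy as np

FS = 200                             # sampling rate in Hz, as described above
DURATION_S = 60                      # hypothetical session length, for illustration only
CHANNELS = ["T7", "F8", "Cz", "P4"]  # the four electrode positions

# One session as a channels-by-samples array: 4 x (200 * 60) = 4 x 12,000
eeg = np.zeros((len(CHANNELS), FS * DURATION_S))
print(eeg.shape)  # (4, 12000)
```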
How It Works
The analysis involved several key steps:
Cleaning Up the Signals
- First, we took the raw EEG recordings and removed unwanted noise
- Applied a band-pass filter to keep brain activity between 1 and 40 Hz
- Used a notch filter to remove interference from power lines (50 Hz hum)
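As a concrete illustration, here is a minimal sketch of that filtering chain using SciPy. The filter order and the notch quality factor are my assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def preprocess(eeg, fs=200):
    """Band-pass one EEG channel to 1-40 Hz, then notch out 50 Hz mains hum."""
    # 4th-order Butterworth band-pass keeps the 1-40 Hz range of interest
    b, a = butter(4, [1.0, 40.0], btype="bandpass", fs=fs)
    eeg = filtfilt(b, a, eeg)
    # Narrow notch at 50 Hz suppresses power-line interference
    b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)
    return filtfilt(b, a, eeg)
```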
Finding Patterns
- Extracted 48 different features from the brain signals
- Some features looked at basic signal properties like amplitude
- Others examined frequency patterns across different brain wave bands (delta, theta, alpha, beta, and gamma)
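The spectral half of that feature set can be sketched with Welch's method. The band edges below follow common convention - the paper may define them slightly differently - and the full 48-feature set also includes time-domain statistics not shown here.

```python
import numpy as np
from scipy.signal import welch

# Conventional EEG band edges in Hz (an assumption; exact definitions vary by paper)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 40)}

def band_powers(eeg, fs=200):
    """Return the mean power in each frequency band for one channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)  # Welch PSD over 2-second windows
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}
```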
Machine Learning
- Built a neural network to learn patterns specific to each person
- Used these patterns to try to identify individuals from their brain signals
- Compared performance between different listening conditions
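The paper's exact network isn't reproduced here, but the idea can be sketched with scikit-learn's MLPClassifier as a stand-in; the hidden-layer size and train/test split below are my assumptions.

```python
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_identifier(X, y):
    """Fit a small neural network mapping feature vectors to subject IDs."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    clf = make_pipeline(
        StandardScaler(),  # put all 48 features on a comparable scale
        # Illustrative architecture, not the one used in the paper
        MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0))
    clf.fit(X_tr, y_tr)
    print(f"Identification accuracy: {clf.score(X_te, y_te):.2%}")
    return clf
```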
What We Found
The results were pretty interesting:
- Using auditory stimuli improved identification accuracy from 68.51% to 72.81%
- That's about a 6.3% relative improvement compared to just sitting quietly (see the quick check after this list)
- In-ear headphones worked slightly better than bone-conducting ones
- The language of the audio didn't really matter - what mattered was just having some audio input
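As a quick check, the relative improvement is just the percentage-point gain divided by the baseline accuracy:

```python
baseline, with_audio = 68.51, 72.81  # accuracies reported above, in percent
print(f"{(with_audio - baseline) / baseline:.2%}")  # 6.28%
```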
Key Takeaways
This project confirms that using brain signals for identification is not just science fiction - it's a real possibility with some unique advantages. Adding audio stimuli makes the system work better, probably because it creates more distinct brain activity patterns. While we're still a way off from using this in everyday life, it's a fascinating look at how our brains process information and how that might be used for security in the future.
The code and technical implementation are available in the project repository for those interested in the computational details.