> We will make the code to reproduce these results publicly available
I'm excited about this trend; it makes it easier for everyday software engineers to tinker with cutting-edge tools.
I think of innovation as a pipeline running from scientific insights to engineering advances to improvements in the consumer experience. If research communities continue to borrow more ideas from open source culture, it could really improve the speed of that pipeline.
Whenever I read something like "We will make the code to reproduce these results publicly available", instead of making me happy, it makes me sad that so many people never follow through. That's a huge disappointment if the paper's ideas really would've helped you.
Yes, it should be the norm, but if the code isn't available at peer-review time, there's a nonzero chance it never will be.
Our code is already available in very chaotic, not well-documented form at https://github.com/robintibor/braindecode/.
I am working on producing a clean version of this :)
They reported accuracies for baseline FBCSP and ConvNets on the BCIC datasets of around 68%-70%, which is reminiscent of early WER for speech recognition (prior to the advent of specialized speech architectures). However, when you consider that the datasets (BCIC 2a and 2b) use only 22 scalp electrodes (for 2a) and 3 (for 2b) at 250 Hz, that's pretty amazing. That's analogous to using something like 8-bit audio for speech recognition.
Clearly there is lots of room for improvement here, in neural net architectures, measurement technology, and preprocessing alike. One technique the authors didn't explore is source localization [1], which typically requires much higher density (somewhere on the order of 128-256 active electrodes) but offers much higher spatial resolution (up to ~5 mm isotropic). Given that most EEG datasets are recorded on fewer than 32 channels, I wonder how much more signal can be extracted from such low-resolution recordings, or whether we're approaching the theoretical maximum channel capacity of 32 scalp electrodes.
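For a sense of scale on that capacity question, here's a crude back-of-the-envelope ceiling on the raw data rate of a montage. Every number below (sample rate, usable bits per sample, channel counts) is an illustrative assumption; the real information content is far lower, since channels are highly correlated and mostly noise.

```python
def raw_data_rate_bits_per_sec(n_channels, sample_rate_hz, effective_bits):
    """Crude ceiling: channels x samples/sec x usable bits per sample."""
    return n_channels * sample_rate_hz * effective_bits

# BCIC-2a-like montage: 22 channels at 250 Hz, assuming ~12 usable bits/sample.
print(raw_data_rate_bits_per_sec(22, 250, 12))   # 66000 bits/s ceiling

# Hypothetical denser source-localization cap: 256 channels, same rate.
print(raw_data_rate_bits_per_sec(256, 250, 12))  # 768000 bits/s ceiling
```

Even the optimistic ceiling for a 22-channel cap is tiny next to what the underlying neural population is doing, which is the real point of the channel-capacity worry.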
It's like Feynman's analogy of measuring the height of corks in a swimming pool to determine the physics of an object that hit its surface. The more corks you have, the more information about the source (location, velocity, trajectory, geometry) you can reconstruct.
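The corks intuition can be sketched numerically. Below is a toy 1-D inverse problem (made-up positions and smearing kernel, not a real head model): hidden sources are seen by each sensor only as a distance-weighted average, and a least-squares estimate recovers them better as the sensor count grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# 64 hidden "sources" at fixed positions with random amplitudes.
n_src = 64
src_pos = np.linspace(0.0, 1.0, n_src)
sources = rng.standard_normal(n_src)

def recon_error(n_sensors):
    """Relative error of a least-squares source estimate from n_sensors."""
    sens_pos = np.linspace(0.0, 1.0, n_sensors)
    dist = np.abs(sens_pos[:, None] - src_pos[None, :])
    # Gain falls off with distance; the 0.05 offset stands in for the
    # sensor-to-source gap (skull, scalp). Purely illustrative numbers.
    gain = 1.0 / (0.05 + dist) ** 2
    measurements = gain @ sources
    estimate, *_ = np.linalg.lstsq(gain, measurements, rcond=None)
    return float(np.linalg.norm(estimate - sources) / np.linalg.norm(sources))

# More "corks" (sensors) -> more faithful reconstruction of the sources.
for n in (8, 32, 128):
    print(n, recon_error(n))
```

With 8 sensors the problem is badly underdetermined and the estimate is poor; with 128 the error drops substantially, which is the whole argument for high-density montages.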
I don't see why this research couldn't be applied to empirical data from an arbitrary system of coupled oscillators.
I am interested in EEG and fNIRS, but there are drawbacks. Naively, people will try to study this data without first doing the necessary back-of-the-napkin calculation.
EEG and fNIRS have hard physical limitations. EEG picks up only large-scale EM field activity, in which the higher-resolution perturbations and effects are averaged out, because of the increased measurement distance and the noise acquired through the skull and other intermediary tissue. That's a real problem, since the current scientific consensus is that high-frequency phase and activity are fairly important for information coding.
On the bright side, a lot of information might also be encoded in the larger-scale synchronized oscillations that happen in the brain (the stuff EEG does pick up on). This space is obviously lower in dimension.
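Here's a quick numpy sketch of why a distant electrode sees synchronized slow rhythms but loses incoherent fast activity (source counts and frequencies are made-up, not physiological estimates): the electrode roughly sums many sources, so N in-phase sources add coherently (amplitude ~ N) while N random-phase sources add incoherently (amplitude ~ sqrt(N)).

```python
import numpy as np

rng = np.random.default_rng(1)

n_sources = 1000
t = np.linspace(0.0, 1.0, 1000, endpoint=False)  # 1 s at 1 kHz

# Synchronized 10 Hz "alpha-like" population: every source in phase,
# so the summed signal's amplitude scales with n_sources.
sync = np.sum(np.tile(np.sin(2 * np.pi * 10 * t), (n_sources, 1)), axis=0)

# Desynchronized 80 Hz "gamma-like" population: random phases, so the
# contributions largely cancel and the sum scales like sqrt(n_sources).
phases = rng.uniform(0.0, 2 * np.pi, n_sources)
desync = np.sum(np.sin(2 * np.pi * 80 * t[None, :] + phases[:, None]), axis=0)

print("synchronized peak:  ", np.abs(sync).max())    # ~ n_sources
print("desynchronized peak:", np.abs(desync).max())  # ~ sqrt(n_sources)
```

The synchronized population dominates the summed signal by more than an order of magnitude, which is roughly why scalp EEG is biased toward large-scale synchronized oscillations.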
The only work-around for this hard information limit is to explore invasive BCI technology (e.g. tetrodes connected to your neurons).
Relatively speaking, this isn't difficult for scientific laboratory research because:
1. We don't care about invasive surgery on rats.
2. We don't care how comfortable rats are.
3. We don't care how mobile rats are.
On the other hand, for commercial purposes, it is not feasible to stick a wired-tetrode array into a human brain yet. We can't afford to lose a human. Engineering on the invasive BCI frontier is incredibly primitive right now.
fMRI has very poor temporal resolution and moderately poor spatial resolution; it's like trying to figure out the workings of your computer using a meat thermometer.
Yes, fMRI lacks temporal resolution, but its spatial resolution is far better than EEG's. You also get the benefit of 3D spatial data, while EEG only provides 2D data from the scalp. If temporal resolution is important, MEG can be used, and it has better spatial resolution than EEG.
Generally, EEG's benefit is that it's cheaper (either in terms of data collection or for showing your method has the potential to be commoditized).
A portable fMRI machine... that's the stuff of dreams. For sure that would change the world for the better, but I really don't see how you would even begin to tackle it.
There's a decent amount of work that's been done already in this area, see [1] and follow up work by the same authors for a good example.
While the approach is "cool", there are still many unanswered questions about what exactly it tells us about the brain.
See [2], a critical review of [1], for an indication of what I mean.
They even wrote "We will make the code to reproduce these results publicly available". That should be the norm. Kudos.