Hello! It took almost 2 years to develop this device and publish an article on it. This is an #open_source brain-computer interface. I am not sure that everybody here has access to the Springer library; sorry, this paper is not open-access. But the technical details are on my GitHub page: https://github.com/Ildaron/ironbci
Not every journal is a viable option, and not every journal has the same standing. In theory everyone would prefer to publish in an open-access journal, but sometimes there's an extra fee for that, or there's no good (reputable, decent impact factor, appropriate for the topic) open-access journal for your project...
re: going against your CV, do you have any insight into why this might be? Open access journals are not always predatory or fraudulent publications. It seems to me that as long as the journal is peer reviewed, publishes high quality research, and has a reasonable impact factor, tenure committees being biased against them is irrational.
You have to understand this: Academia runs on clout. Not quality, not replicability, but reputation, however it is earned.
That means that an amazing paper published in PLoS ONE is generally worse for your career than an unreplicable pile of drek in Nature. The quality of the paper, barring notable impact outliers, is secondary to the journal it's published in.
The problem isn't the reality of journal quality, it's just the common perception. The unfortunate fact is that you have to cater to the biases of wider academia.
While more established researchers can turn down the chance to publish in one of the big, closed journals, it's an opportunity that many cannot turn down.
> as long as the journal is peer reviewed, publishes high quality research, and has a reasonable impact factor, tenure committees being biased against them is irrational
I agree. And stronger than that, recently some funding agencies (e.g. the Dutch NWO) favor/require open access for works that come out of projects they fund, and they have budget set aside for that. The trend is hopefully changing, but it takes time.
So this seems to be based around the Texas Instruments ADS1299 chip, which is described by TI as:
The ADS1299-4, ADS1299-6, and ADS1299 devices are a family of four-, six-, and eight-channel, low-noise, 24-bit, simultaneous-sampling delta-sigma (ΔΣ) analog-to-digital converters (ADCs) with a built-in programmable gain amplifier (PGA), internal reference, and an onboard oscillator. The ADS1299-x incorporates all commonly-required features for extracranial electroencephalogram (EEG) and electrocardiography (ECG) applications. With its high levels of integration and exceptional performance, the ADS1299-x enables the creation of scalable medical instrumentation systems at significantly reduced size, power, and overall cost.
The product page [1] further states that it talks SPI with the host system (an STM32 in this project, nice).
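For a sense of what that SPI interface looks like, here's a sketch of how the ADS1299's register-access command bytes are framed, based on my reading of the TI datasheet (the opcode values and the 0x3E device-ID mentioned in the comment below are assumptions to verify against the datasheet before relying on them):

```python
# ADS1299 SPI command framing (sketch; check against the TI datasheet).
# RREG: first byte is 0x20 | start register address,
#       second byte is (number of registers to read) - 1.
# WREG: same layout with 0x40, followed by the data bytes.

SDATAC = 0x11  # "stop continuous data" opcode, required before register access

def rreg_frame(start_addr: int, count: int = 1) -> bytes:
    """Build the two command bytes for reading `count` registers."""
    if not 0 <= start_addr <= 0x1F:
        raise ValueError("register address out of range")
    return bytes([0x20 | start_addr, count - 1])

def wreg_frame(start_addr: int, values: bytes) -> bytes:
    """Build the command bytes for writing `values` starting at start_addr."""
    if not 0 <= start_addr <= 0x1F:
        raise ValueError("register address out of range")
    return bytes([0x40 | start_addr, len(values) - 1]) + values

# Reading the ID register (address 0x00) sends 0x20, 0x00, then clocks
# out one byte; I believe the 8-channel ADS1299 returns 0x3E there.
print(rreg_frame(0x00).hex())  # "2000"
```

On real hardware these bytes would go out over the STM32's SPI peripheral with chip-select held low for the whole transaction; this sketch only shows the framing, not the bus driver.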
US price for the ADS1299 on Digi-Key seems to be in the region of $50 in single-quantity.
Thanks for sharing! Did you have a look at what else is needed to attempt to build this? There are some Gerber files for the PCBs, plus probably the components soldered on top, wires, clips, and a headband?
Edit: the paper actually mentions the cost of ironbci to be $350
I read some books by Nicolelis several years ago, and it seemed back then that really useful BCIs still required invasive surgery, because not only do you need to gather signals from a large number of neurons, you also need to be able to distinguish them (and in EEG the signals are merged together and then further obscured by the effect of the skull, etc.).
It will be interesting to see how much adoption Neuralink can get with the invasive approach.
Apart from BCI (which, after dropping out of a PhD in this field, I am extremely skeptical of), EEG is used to measure things like sleep-wake cycles, as well as predicting seizures in epilepsy patients (which, to be fair, is also not a hugely proven technology). I believe there were also some tests using it to measure level of attention in fighter pilots.
Basically a whole lot of bullshit papers in the field, coupled with negligible improvement and a belief that AI will somehow magically solve the problem of garbage data.
Thanks, I appreciate your reply! I've worked in AI in past jobs - though I don't have an academic background in it - and I would definitely agree with the issue of garbage data, as I suspect would many real-world practitioners.
> I believe there were also some tests using it to measure level of attention in fighter pilots.
I would not put much (any) stock into this. The army has put out RFPs for using magnetic tourniquets to induce clotting without external pressure. With magic. Just because the military pays someone to try something does not mean that it is not entirely bullshit.
A fair comment and a good thing to remember when we think about computer hardware or facilities for people to use in general: Not everyone is "able-bodied" and many of us are facing all sorts of physical, environmental or psychological challenges.
How is this effort different from what the people over at https://openbci.com/ are doing? It seems like the problem is hard enough that people should try working together as much as possible, rather than starting new, independent efforts, unless the approaches are wildly different.
The more compelling challenge is a comfortable yet durable electrode that has good impedance without the need for gel. OpenBCI is one of the few organizations that have a good dry electrode.
Had this been a bit cheaper I might have gone ahead and bought it. The idea of being able to embed this thing in a hat and search for songs on my phone just by thinking of them is awesome.
Unrelated: I've been trying to think about a data store that's part of me somehow. I guess we carry our phones with us everywhere, but still. It would be your data literally on you, accessed wirelessly or over USB when connected.
Have you heard of a flash drive lol.
The thing is, this would store your collective knowledge/notes written over the years, and it would be secure.
Related to this, a hybrid mind/brain interface would be an ideal part of it: thinking to control a device just by looking at it.
Yeah, I've heard of interesting approaches other than "off the shelf" stuff like Evernote/Notion/OneNote/etc. Some people have their own CLI tools, and so on. I spend most of my time in the browser, so I'm working on a Chrome-extension-oriented approach, but the goal is still a personal data store not tied to a cloud.
Hi! Current neural engineering PhD candidate here. Your question is one I had early on when learning about brain modeling.
Noninvasive EEG sensors measure the synchronous activity of millions of neurons. While circuit equivalents of individual neurons are a well established idea (see Hodgkin-Huxley models for an early example), this approach does not scale to models of the full brain. There is research into using neural mass or neural field models as generative models of EEG/MEG, but those are statistical models over larger volumes of brain.
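To make the "circuit equivalent of an individual neuron" concrete, here's a minimal forward-Euler simulation of the classic Hodgkin-Huxley model with standard textbook squid-axon parameters. This is purely illustrative of the modeling idea, not a model anyone uses for EEG:

```python
import math

# Minimal Hodgkin-Huxley neuron, forward-Euler integration.
# Standard squid-axon parameters (mV, ms, uA/cm^2, mS/cm^2, uF/cm^2).
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent gating rate functions (alpha/beta for m, h, n).
def a_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * math.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + math.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * math.exp(-(V + 65) / 80)

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    """Return the membrane-voltage trace under constant injected current."""
    V = -65.0
    m = a_m(V) / (a_m(V) + b_m(V))  # gating variables start at steady state
    h = a_h(V) / (a_h(V) + b_h(V))
    n = a_n(V) / (a_n(V) + b_n(V))
    trace = []
    for _ in range(int(T / dt)):
        I_ion = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K)
                 + g_L * (V - E_L))
        V += dt * (I_ext - I_ion) / C
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        trace.append(V)
    return trace

trace = simulate()
# With 10 uA/cm^2 of injected current the model fires repeatedly,
# with spike peaks well above 0 mV.
```

Multiply this by tens of billions of coupled neurons and it's clear why the single-cell circuit approach doesn't scale to whole-brain generative models of EEG, which is why neural mass/field models average over populations instead.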
The brain is not an electric circuit. It's more like a hydraulic or osmotic circuit. Thought is powered by tiny flows of sodium ions, not flows of electricity. Specifically, sodium diffuses in blood/water much faster than charge dissipates through brain tissue.
When a lot of sodium flows into a cell (a neuron fires), it creates a positive voltage difference across the cell membrane. The concentration of sodium ions floating in the extracellular space remains essentially constant (sodium rapidly flows in from the outside area) even as the charge inside the neuron rises.
So at the peak of its action potential the neuron acts like a ~70 mV voltage source relative to the outside of the body. The EEG measures that voltage. It's like holding a fluorescent light near a Tesla coil[1]: electrodes closest to the neuron see a higher voltage from it, because they're measuring the ratio between (1) the resistance from the electrode to the neuron, through the head, and (2) the resistance to reference ground through the EEG itself. Because the head has quite high resistance (~megaohms?), EEGs require some serious input impedance.
On average, neurons spend ~1 out of every ~10,000 milliseconds firing. If you had ~100 electrodes and 1 million neurons, you could triangulate and mostly work out where every single neuron is when it fires. In a real human brain, though, there are ~a billion neurons per electrode, with ~100,000 of them firing at any one time. EEGs are only really picking up on coordinated groups of millions of neurons firing together, so inferring where the actual neurons are is inherently a blurry guess. It is possible, but afaik (and I am very lay here) most visualizations only interpolate surface potentials. Implanted electrodes can locate individual neurons, which is incredibly cool.
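A quick back-of-envelope check on those numbers, taking the ~86 billion total-neuron estimate and the duty cycle above at face value:

```python
# Back-of-envelope: scale of the EEG source-localization problem.
total_neurons = 86e9           # rough estimate for a human brain
electrodes = 100               # dense EEG cap
firing_fraction = 1 / 10_000   # ~1 ms spike per ~10,000 ms, as above

per_electrode = total_neurons / electrodes    # ~1e9 neurons per electrode
firing_now = per_electrode * firing_fraction  # ~1e5 firing at any instant

print(f"{per_electrode:.0e} neurons per electrode, "
      f"{firing_now:.0e} firing at any instant per electrode")
```

That recovers the "~billion neurons per electrode, ~100,000 firing at any one time" figures from the comment, which is why source localization from scalp EEG can only ever be a statistical estimate over large populations.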