VR is neat, but I'm absolutely inspired by the idea of improved human-computer interfaces like those described, and by their more mundane uses. We have eye tracking, the ability to read electrical signals and positioning from the hands or head, the ability to provide feedback to the body through vibration (or sound), and existing high-quality peripherals like keyboards for high-precision entry and correction. There has to be a way to combine all of this into a better vim.

A couple of binary inputs for CTRL and SHIFT from reading brainwaves seem doable. Feedback on whether they're active could come from distinct high-frequency vibrations on the hands, neck, or ears.

There are lots of easy wins here if I can just push a couple of mental buttons with my brain. Trigger scrolling or text selection based on eye movement. Select a buffer or macro based on the mental buttons I'm pushing. You basically get a binary digit for every mental button you're capable of simultaneously tracking.
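
Roughly what I have in mind, as a toy Python sketch; the button states here are a made-up stand-in for whatever per-button booleans an EEG pipeline would actually produce:

  # Each mental button contributes one bit to a modifier bitmask.
  CTRL, SHIFT = 0b01, 0b10

  def modifier_mask(button_states):
      """Fold per-button booleans into a single bitmask."""
      mask = 0
      for bit, active in button_states.items():
          if active:
              mask |= bit
      return mask

  # Fake classifier output for one time step: CTRL held, SHIFT released.
  states = {CTRL: True, SHIFT: False}
  mask = modifier_mask(states)
  if mask & CTRL:
      print("ctrl down -> buzz the left wrist")   # vibrational feedback
  if mask & SHIFT:
      print("shift down -> buzz the right wrist")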

I'm home sick, and maybe it's the fever talking, but I think I'm going to do some basic research and start hacking something together. Very small chance of success, but I want mental modifier keys with vibrational feedback!




Summary: BCIs are still bad; use foot pedals, as suggested in the sibling comment.

I don't want to rain on the parade, but BCIs (brain-computer interfaces; here, EEG interpreted as input for something) are far from usable for everyday things. The technology sort of works, but it is slow and unreliable. The most reliable method was P300 [0], which requires several repetitions of the stimulus someone decided to select. Entering letters, for example, can be done by flashing the rows and columns of a matrix of letters while the user concentrates on the one to choose. It's reactive in the sense that it only triggers when an expected stimulus is detected. A lot less reliable was a system trained to recognize imagined motions (raising the left or right leg or arm as different categories), and the worst method I saw was a simple EEG feedback loop for moving something on the screen in one of two directions. This rarely worked even though it was a simple binary choice. Any switch will be better than this for now.

[0] https://en.wikipedia.org/wiki/P300_(neuroscience)
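
For a sense of how that row/column scheme works, here's a toy simulation; detect_p300() is a made-up stand-in for a real classifier, and the noise level is invented:

  # Flash each row and column several times, score the brain response to
  # each flash, and pick the cell where the best row and column intersect.
  import random

  MATRIX = ["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ0123", "456789"]
  TARGET = "H"       # the letter the user is concentrating on (simulated)
  REPETITIONS = 10   # P300 needs several repetitions to average out noise

  def detect_p300(flash_is_target):
      """Fake detector: noisy score, higher when the attended cell flashed."""
      return random.gauss(1.0 if flash_is_target else 0.0, 0.8)

  row_scores = [0.0] * 6
  col_scores = [0.0] * 6
  t_row = next(i for i, row in enumerate(MATRIX) if TARGET in row)
  t_col = MATRIX[t_row].index(TARGET)

  for _ in range(REPETITIONS):
      for i in range(6):  # flash each row, then each column
          row_scores[i] += detect_p300(i == t_row)
          col_scores[i] += detect_p300(i == t_col)

  best_row = row_scores.index(max(row_scores))
  best_col = col_scores.index(max(col_scores))
  print("selected:", MATRIX[best_row][best_col])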


Totally fair. There are a few things I'd like to point out:

  * I get to push the EEG button with just my brain, which is neat and fun.
  * I get to build multiple high-resolution EEGs into something that doesn't look terrible, like a pair of headphones or a hat, which is fun (yet flammable).
  * I play a few instruments (most recently bought a violin, which I'm still terrible with). It took years to become good at each instrument. I assume that it would take years to become good at using a BCI and properly adjust it to my brain.
  * I'm working on this with the initial assumption that I'm most likely going to fail, but I'm going to have a good time.
  * Edit: It also gives me an excuse to work on this: https://news.ycombinator.com/item?id=21661567


I'm neither a doctor nor a psychologist, but here is a wild guess: I would assume that it takes about as long to train yourself to work with a BCI as it takes to, e.g., learn to reliably wiggle one ear, raise an eyebrow, or make any voluntary movement you don't yet control. Reasoning: in both cases you have a feedback loop and need to learn to reliably produce the appropriate output, and it shouldn't matter much whether that output goes to actual muscles or via EEG to trigger a response.
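
If that's right, the training setup is basically the same loop in both cases: produce output, see it immediately, adjust. A minimal sketch, with random numbers standing in for a real EEG feature like alpha band power:

  import random, time

  THRESHOLD = 0.7

  def read_band_power():
      """Stand-in for a real EEG feature extracted from the headset."""
      return random.random()

  for _ in range(20):  # one short training session
      level = read_band_power()
      bar = "#" * int(level * 40)
      state = "TRIGGERED" if level > THRESHOLD else ""
      print(f"{bar:<40} {state}")  # the feedback the brain learns against
      time.sleep(0.1)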


Maybe you want to give this a try first: https://www.emacswiki.org/emacs/FootSwitches (it seems a lot more reliable and easier to implement than messing with EEGs).
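
On Linux, something like the sketch below should get most of the way there using the python-evdev library (it needs read access to the input device and write access to /dev/uinput); the device path and key code are guesses you'd replace with your pedal's actual values:

  from evdev import InputDevice, UInput, ecodes as e  # pip install evdev

  # Hypothetical device path -- look yours up in /dev/input/by-id/
  pedal = InputDevice("/dev/input/by-id/usb-footswitch-event-kbd")
  pedal.grab()   # keep the pedal's raw keystroke from reaching other apps
  ui = UInput()  # virtual keyboard for injecting the modifier

  for event in pedal.read_loop():
      # Many pedals show up as a keyboard sending one fixed key; KEY_B is a guess.
      if event.type == e.EV_KEY and event.code == e.KEY_B and event.value in (0, 1):
          ui.write(e.EV_KEY, e.KEY_LEFTCTRL, event.value)  # 1 = press, 0 = release
          ui.syn()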


I can't imagine this not being the future of how we interface with our computing devices, long before proper neural control interfaces arrive.

I'm curious what research is being done on this front.



