Hacker News
Cursorless – A spoken language for structural code editing (github.com/cursorless-dev)
175 points by yewenjie on Sept 3, 2022 | 68 comments



Cursorless is fantastic, this saved my bacon last year when I couldn't use my hands for months. I donate to Pokey because I'll need this again in the future, though right now I'm typing okay.

I think able-bodied programmers will probably get annoyed setting it up, setting talon up, learning the phonetic alphabet and basic commands, but like.. if your hands don't work, this can literally save your career. this is a disability aid for creating things, it's vital and empowering!

the hard part IMO is getting back into the coding headspace you had when you could type. it's different speaking code, formatting it in your head before you say command phrases. everything's a little awkward, you can't get into the zone the same way.


I have some very mixed feelings about talon. I looked into it a decent bit when I started having hand issues as well, and never got anywhere useful with it.

For one, it needs to be programmed to do anything. This gets you all the options, which is wonderful, but it's also a bit of a problem when you're looking into it because you can't, ya know, program. And it's not a simple thing to interact with. Lots of moving parts and various behind-the-scenes interactions.
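(For a sense of what that programming looks like: Talon commands live in plain-text `.talon` files that map spoken phrases to actions. A rough sketch from memory, with illustrative command names, so treat the details as approximate:)

```
# sketch of a .talon file (from memory; command names illustrative)
# the header above the dash restricts these commands to one app
app: vscode
-
# saying "save it" presses ctrl-s
save it: key(ctrl-s)
# saying "say hello there" types "hello there"
say <phrase>: insert(phrase)
```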

For another, there's virtually zero documentation. Like, officially none, and unofficially not much more than a few paragraphs.

And to put the cherry on top, it's a closed source mess of a program. It's not easy to get much in the way of useful information out of it. You can poke around some in a Python REPL, but that'll basically just get you the interface. And somehow, it's one of the most useful sources of information, as at least you've got something to go on. And as mentioned, Python, so you don't have much in the way of language-level guarantees to try to rely on to figure stuff out either.

Yes, there is the popular user config. To be clear, I'm not trying to be unkind to it. It managed to overcome a lot, and is very useful to lots of people for many of the reasons I stated above. But. It also clearly has its roots as one person's personal config that got popular. There is so much in there. It's basically its own whole thing to learn on top of talon.

In terms of learning from it, well, it doesn't seem trivial. It's basically its own whole thing, but how it all interacts is all talon. You're basically learning two separate yet intertwined systems at once and have to figure out which bits belong to what. It's also got various bits from multiple iterations of talon's interface, so the same thing may be accomplished multiple ways. It's actually quite well commented in some places about this fact, which is really nice. But it does make you wonder what isn't well commented...

Ultimately I just gave up. It seems like a potentially hugely useful thing, but actually trying to use it required learning a bunch of stuff I wasn't otherwise interested in, figuring out the minutiae of a complex, unstable interface the hardest ways possible, and then spending a ton of time writing my own program/config.

Aside from the time involved, the amount of typing required was prohibitive.

So yeah. My feelings on talon... mixed, to say the least. Or most, judging by the above.


I agree the non-programmer use of Talon is still unpolished. That was an intentional prioritization as I work to solidify the underlying tech. Note that your best option for documentation right now is the Slack, where I am very active.

I have a clear vision for easier customization and adaptation in the long term.

Maybe check back every now and then, it's a labor of love and I plan to continue improving it for a long time.


Well, I am a programmer, I just wasn't able to do much programming due to physical limitations. Admittedly I'm not much of a Python programmer, and I wouldn't be surprised if a deeper understanding of not-just-a-shell-script Python idioms helped me get a little more out of the REPL poking. But uh, still.

As for Slack, I don't really consider that documentation. It's not really usable in the same way. It's more akin to IRC. It's potentially very helpful, even when the documentation isn't helping, but it's not the same thing. Maybe I'm just old, even though I'm not that old... I think...

Anyway, I hope this doesn't come off too negatively overall. I really like the idea of Talon and I'm comforted by its existence, in case my hands start acting up again. From my attempts to investigate it I got a few glimpses into just how deep you've gone, even down into the speech model itself, to make it what it is. And I know any interface to something as complex as this is itself a large series of extremely ambiguous trade-offs that's just painful to make. And of course documentation itself takes time to create, and then needs to be kept up to date, and all that. I get it.

If it didn't seem to hold much promise, I wouldn't be so sour about failing to figure out how to use it effectively.

One thing that might be worth considering is moving to a source-available model, if full on Open Source doesn't suit your goals. Having the option to try to figure things out by looking at the code could be very helpful.


> unofficially not much more than a few paragraphs

There's a lot more than a few paragraphs on the wiki (which is linked right after the setup instructions in the official docs), including ~30 pages about the scripting system alone: https://talon.wiki/unofficial_talon_docs/

> As for Slack, I don't really consider that documentation

I think you should try asking questions on this particular Slack before you write it off and recommend that I do something else. It is my official support channel and I am extremely active and helpful there.

> Having the option to try to figure things out by looking at the code could be very helpful

One thing to try - Talon has good Python type annotation coverage and the type information is shipped with Talon. If you point VSCode at Talon's Python interpreter (at `~/.talon/bin/python` on Linux/Mac or `%appdata%\talon\.venv\scripts\python` on Windows), the language tooling in your editor should then know about Talon's API and types.
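(For instance, in VSCode that could be a one-line settings change. A sketch only: the path here assumes a default Linux/Mac install as described above, and VSCode's settings.json accepts comments.)

```json
// .vscode/settings.json (sketch; adjust the path for your install)
{
  // point the Python language tooling at Talon's bundled interpreter
  "python.defaultInterpreterPath": "~/.talon/bin/python"
}
```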


> There's a lot more than a few paragraphs on the wiki

You're right, I wasn't very fair to the wiki. I would have failed out way quicker without it. It wasn't intentional, it's just been a bit since I was looking at all this. My memory had dropped the details and left me with just my overall impressions.

That said, the wiki was still a bit short on some of the deeper details. I remember a fair number of places where the wiki had just enough to hint at the possibilities but didn't provide some of the important pieces to put it together into something useful.

And being community driven, I was a little wary of taking it as gospel. Even popular and stable software often has some niggling issues with community documentation.

> I think you should try asking questions on this particular Slack before you write it off

I'm not writing it off. I have no doubt it - and you - would be very helpful. I just don't consider it documentation, in the same way I don't consider a knowledgeable friend documentation.

Part of the reason I didn't pop in was that I was stuck doing a slow one-finger hunt-and-peck on a blank keyboard at the time. Searching the web was more than frustrating enough to make it real tempting to ignore the cost of typing, never mind Slack. And partly I admit I just don't like asking questions in that format unless it seems like a reasonably complex/detailed/weird question.

> point VSCode at Talon's Python interpreter

Yeah, that probably would have helped. My setup at the time didn't have much for Python, so I was mostly using the REPL and basic searching.


It sounds like Serenade.ai would be more useful to you. It is more of a complete product, with the ability to add custom commands on top of it. Plus it's open source.

I'm not trying to steal the spotlight from Talon/Cursorless though, they are amazing and super useful. But it is definitely a much more fragmented ecosystem which requires more work.


Are people still maintaining Serenade? It looks like since they open sourced, it's mostly just been one person committing to it: https://github.com/serenadeai/serenade/commits/master


I think I looked at that briefly at one point. I passed by it quickly because it lacked support for both my usual editor and language.

But I've recently revisited VS Code and found it improved enough to be usable for me. My hands have also improved enough that I can do a reasonable amount of typing now too, so perhaps adding language support would be an option. Hm...

Thanks for the suggestion!


The whole ecosystem around Talon[0] is fantastic. You'll probably want to start with [1] if it's your first time, it's a community-maintained talon config which is a good starting point for customization.

The speech recognition engine built for it is great and the whole thing is free of charge, though I do recommend donating on Patreon [2].

Cursorless is a nice rethinking-from-the-ground-up kind of project. However, you can just as well keep using your current tools.

I've used Talon on and off with Jetbrains IDEs when I had some wrist pain and the overall experience was great, with my coding speed being fast enough without much practice. When voice typing, autocomplete is also very useful, so the introduction of Copilot was an additional huge improvement.

But overall, the design of Talon, its config files, and the way this leads to very easy customizability, is really great. A bit like making your whole OS easily automatable and programmable (even if your OS isn't emacs /s).

[0]: https://talonvoice.com/

[1]: https://github.com/knausj85/knausj_talon

[2]: https://www.patreon.com/lunixbochs


I agree. Talon is incredible. Before I discovered it, I was convinced that my career would be ended early by RSI, as it was getting slightly worse every year. Meanwhile voice coding looked like impenetrable wizardry from the few videos that I had seen.

But after discovering it and getting through the learning curve (and there is one, that's probably the biggest reason why more people don't use it), it's something I look forward to using for the rest of my life. Even as my wrists start to gradually recover, being able to use voice/noise/eye tracking in tandem with the keyboard, using the best tool for each purpose, is just so much more productive (not to mention, cool! fun!) that I doubt I'll ever go back.


This kind of work is also fantastic for those who have limited mobility or use of hands. On top of that, it's also a comforting peace of mind for the rest of us that should we ever e.g. lose our hands in a car collision, we might still have a successful path forward in our programming careers and passions.

Reminds me of a talk I once saw where a developer developed arthritis and couldn't program effectively anymore, so he developed an almost "verbal vim" plugin for vim (or was it emacs?) that allowed him to write/edit code quickly using verbal utterances. I forget what that project was, but I wonder how these two compare.


Are you thinking perhaps of this [1] PyCon talk by Tavis Rudd, who had RSI and a rotator cuff injury? He demos a voice command system that's all short nonsense syllables, including with a few humorous video clips interspersed, and then explains it after the demo.

[1] https://www.youtube.com/watch?v=8SkdfdXWYaI (demo starts 9 minutes in)


I was very inspired by a blind programmer who uses text to speech at inhuman speeds. https://www.parhamdoustdar.com/2016/03/27/autobiography-blin...

That's perhaps an extreme example of what is possible but it's incredible what people can adapt to.


Cursorless is indeed fantastic for this! I used it extensively last winter/spring when I couldn't type.


Seems cool. However, I went down a several-hour rabbit hole trying to build non-documented dependencies and eventually gave up.

The reliance on mono/skiasharp makes this a far less portable application, and to say there is Linux support seems disingenuous. Binaries only exist for Ubuntu; they are said to work on non-Ubuntu systems, but NuGet fails to install them.

Building from scratch leads to several undocumented dependencies and I'm still receiving increasingly inscrutable build failures out of the box. Feels like 2005... My patience ran out.

If anyone on an RPM-based system gets everything up and running, drop me a line.


You talking about Talon? It comes with all of the dependencies. Use ./run.sh to set the library path and ask on the Slack if there are any further issues.

(fwiw it's not skiasharp itself, it's just the mono/skia repo, which is C++)


Thanks for the tip. Any reason run.sh isn't mentioned anywhere in the documentation? https://talonvoice.com/docs/


Isn't it mentioned in the README in the tarball?


The tarball, talon-linux.tar.xz, didn't come with a README, neither in the root nor under /resources.

I could certainly have broken run.sh open in vim (and should have), I could have looked into /lib and found the missing dependencies, but given that there was a binary literally called talon, the general expectation is this is the binary to be linked to my PATH and run directly. Using a run.sh is not necessary when the same could be achieved by making the talon file a shell script and renaming the binary to something else.


You can likely get support for many platforms with the unofficial nix expression for talon on the slack.


Very interesting. Editing code at the level of text is too low, ideally it would be raised to the level of thought. We don't think in terms of characters changing, we think in terms of higher level block constructs. It's not about the increased typing speed, it's about reducing cognitive load and making problem solving more enjoyable. Pairing with the Unison-lang style system of direct AST manipulation could be revolutionary, if the difficulty doesn't prove too high.


To anyone tempted to get into Talon: note that it is very much not open source - https://talonvoice.com/EULA.txt (except when you contribute code to them; in that case you implicitly license your code as open source, but theirs is still their property).


> very much not open source

I contribute quite a bit to open source as part of my work on Talon, including the speech engine, the tools I use for model training and dataset creation, and I've upstreamed security fixes and other improvements to core projects like CPython and LibUSB.

> except when you contribute code to them

That clause isn't for code contributed to the Talon app, it's so other users in the community can use ad hoc scripts posted to Slack. Talon itself does not use any code under that clause and I still recommend users explicitly license their community code.

I've had the open source conversation plenty of times, but basically it boils down to: it feels like the bare minimum to protect myself from hostile forks, given that I have been working on it full time for over five years now, give it away completely for free, and provide high-quality support.

It's also segmented so that all of the primary functionality (e.g. voice commands and their behaviors) is implemented in open-source, user-controlled code, and I open source plenty of the components of Talon.

You might read the handsfreecoding review [1] which covers this a bit.

[1] https://handsfreecoding.org/2021/12/12/talon-in-depth-review...


> I contribute quite a bit to open source as part of my work on Talon

That's good and admirable! It doesn't change the fact that Talon is not open source, which makes it risky for users to come to depend on it.

I'm not trying to change your mind on whether Talon is open source. You can do whatever you want. The only reason I posted the comment was that I landed on your documentation site at first[0] and I mistook it for an open source project, only to find out later that I was mistaken.

[0] https://talonvoice.com/docs/


I am responding in this way because your top post heavily implies details that are not the case:

- "very much not open source": Talon has a lot of open-source code, including the users' entire custom voice user interface necessary to replicate their setup on a different app (if a compatible app were to exist).

- "except when you contribute code to them": This is simply not the case, users are sharing the code in question with each other as customizations in the community chat. None of that user code ends up "in Talon".

---

Please read the handsfreecoding post. I have mitigated the continuity risks (e.g. bus factor) of being closed source, and I am fundamentally against selling out. It is extremely important to me ideologically that Talon is free and will continue to be free.

Talon is the best option for a lot of people to use their computer without pain. I would not have quit my job and spent years full-time developing Talon under an open source model. To my knowledge I am the only person currently working full-time on this problem.

Please understand that if you convince someone to not use Talon for this reason, their alternative may not be "use some other software", it may be "computers continue to cause them literal pain".


I have read the handsfreecoding post now. It sounds like Talon is a really great tool for the people who need it and it is much closer to open source than ~any other proprietary software I am aware of. If anyone deserves criticism for releasing software that does not respect its users, you're far from front of the queue. It's still true that Talon is not open source, and there are some people who will reject it on those grounds (I am one of them). But that's OK, you don't owe us anything.

Sorry for the misunderstandings I posted in my first comment, they came from a quick reading of the EULA - unfortunately I can't edit that comment now.


Are you on Linux?


Yes.

Re your deleted comment about "hostile forks": allowing users to run whatever software they want is precisely the point of open source. It's not "hostile" to have different opinions to you.

The project looks cool nonetheless, but it is still true that if the users don't control the software, the software controls the users.


I'll repost that in sibling. I just realized that I'd rather have people saying "ah, typical of proprietary software to not support Linux" than "your work has no value because I can't compete with you using your own code". I'm not going to stop supporting Linux because of this, but it still seems like a backwards incentive.

This is a business, not a pet project. I work on it full time and it is my meager source of income. I'm not convinced it is possible to make much of a living on open source.

I do mean hostile when I say hostile. I have a particular project of mine in mind, where the fork renamed, split the user base, and lagged behind my changes. They later just reset to my HEAD (as I was making way more progress than the fork), then removed attribution in the docs and solicited donations for the work I had done. I burned out on that project shortly after.

> if the users don't control the software, the software controls the users

Talon is free, runs offline, cares about user privacy and security, and gives an incredible amount of control to the user. The entire voice interface and behavior is defined in the user's code. There are several open source projects in this space and none of them are anywhere close to Talon on making it easy to build and customize your own voice / hands free interface.


Just out of curiosity: would it be possible to set some kind of donation target, where you'd be okay with open sourcing Talon?

Maybe that would allow you to keep making a living from it as you currently are, as well as allow people to see the code at some point. (Note that I'm saying "see the code", as you could just open source it without accepting contributions.)

Personally I'd be more willing to pay for a product if I knew that my payments helped open source it.

Not saying you should, just wanted to mention it as a possible solution.


After you got me pointed in the right direction, I've been enjoying learning the ropes. I appreciate what you're doing and will subscribe to your Patreon if I find Talon to be a permanent addition to my workflow. Please don't stop supporting Linux :)


I do a lot to reduce context switching even on the small scale, and reducing how many times I switch to the mouse is one of those things. I also reduce screen switching ("alt-tabs" or workspace switching); for that I use a tiling window manager (i3). Since it's impossible to avoid web browsing for work, I also use Vimium to bring more keyboard into that process. I would consider my approach "low-mouse" rather than no-mouse. Certainly not no-keyboard. Although it's not quite voice activation, if you can type then typing is concise and accurate, which makes it quick. Does anyone have any experience comparing this to Vim?

Especially for Vimium (chrome extension, not Vim), I consider it a superpower, because it feels a bit like telekinesis. I can just look at things and make them happen with my keys, no need to navigate toward them.


I wonder if there’s room for integration with Vim/Neovim or the plugins in that ecosystem - I feel like there’s a strong overlap between the “vim language” of text objects, motions, etc and the structure editing this project is working with.


We have plans for adding a keyboard interface [0] at least, though it's still an open discussion. If you're after straight Neovim support though, perhaps nvim-treesitter-textobjects [1] might tickle your fancy?

[0]: https://github.com/cursorless-dev/cursorless/issues/710

[1]: https://github.com/nvim-treesitter/nvim-treesitter-textobjec...


Imho text editing speed is not an issue for programmers.


It’s more complicated than that I think. You don’t spend much time typing, but how much does the time you spend (and the cognitive load of) typing deconstruct the mental architecture you’ve created to solve a problem?

This is at the core of why vim/others are still so popular. The micro-gains in typing efficiency translate to much more efficient problem solving because you don’t “lose your place” within your mental process. I think the same would extend to cursorless.


I find typing can be helpful. I think more about certain aspects of the problem as I type, so I often discover edge cases I hadn't thought about or potential performance pitfalls.

I guess it's a bit like navigating using only the map in the war room vs navigating on the ground without a map. The ideal route comes through a combination of both, and typing allows for the latter point of view.

Though after typing for a while, I'll take a step back and think about the overall solution again, going over it in my mind. Then back to typing.

That said, I gotta have a responsive editor.


And TDD is bad because it requires too much typing /sarcasm


TDD is bad because it introduces distracting busywork. The typing itself isn't the actual problem.

In terms of just typing, I could probably produce something like 50,000 lines of code in a 40 hour work week in a moderately verbose programming language. In practice I'll probably crank out somewhere between 1000-5000.

It's the thinking that is slow. It's made even slower by adding additional tasks and context switching.


The average professional software engineer produces around 10 LOC per day.


I admit this is a weird edge case, but a few years ago I fractured my left elbow and could only use one hand. When typing one-handed, typing speed was a bottleneck.


The issue I have with this, and will always have with speech-to-text software: my voice will never be picked up unless I'm very clear and take the time to pronounce a word or a chord. I'd have more luck writing on a stenograph keyboard or a chording keyboard than with this. I like that this technology assists those whose voices can be read by a computer, but it will never work for me... despite how modern a lot of this software has become.


You may want to look at your hardware setup. Modern (1-3 year old) laptops now generally have incredibly good echo-cancellation microphone setups which massively improve recognition accuracy. If you are stuck with an older machine, try a USB-based lapel mic (with an integral ADC, the logical inverse of the DAC used for playing audio). These can be had off Amazon for a tenner, but will infuriate you every time you forget to take it off and get up from your chair... One last tip: my experience suggests bluetooth microphone quality is not good enough for voice recognition tasks like this.


Thank you for the suggestion, I will look into it. It's just that back when I was in high school the technology wasn't as good for my voice, so I just learned to type faster. But I'm willing to give it a shot, and I could be a "beta tester" or contributor if needed. I tend to use wired gear since I do music production and am very picky about how stuff comes out when I'm finally mixing.


I find it annoying that they are not using the standard phonetic alphabet


from their youtube comment:

> Good question. For efficiency, we try to keep commonly used terms such as letters as short as possible. Note that the words we use for letters are all one syllable, whereas the NATO phonetic alphabet's are mostly two syllables. The words we use for spelling are also chosen so that they can be chained together easily and quickly, eg "harp each look look odd". Easy to say quickly without slurring


I think the main reason is to let the program easily distinguish between letters - and I guess also to have shorter words.


That's also the NATO phonetic alphabet's raison d'être, where a lot more's on the line; a lot more research time and money has been spent on it over the years.

Air, Jury, and Quench in particular strike me as ambiguous examples NATO would never go for. Krunch is just baffling. Plex breaks the scheme followed by every other (that it's the first letter in the word).


Talon's alphabet is extremely carefully designed. There's De Bruijn sequence-style redundant phonetic information in each letter, allowing the single-syllable words to be mashed together at high speed without pauses (it even allows for a bit of slurring to fit more than one letter into a syllable). I can type 60wpm with it; see me spelling a last name around 0:40 in this video for typical usage [1]. It's also carefully tested (with an American accent) to not jam up your mouth / tongue when you say the letters in any order.

You can remap it to whatever you want, and a few users choose either NATO or a custom alphabet. But most stick with Talon's with minor changes (e.g. they might change a specific word if it's harder to say clearly in their accent, or if their mic or environment makes it more difficult to pick up specific sounds).

You can also use the English alphabet for spelling anywhere you can say a word, e.g. "say hello" and "say H E L L O" both work the same.

This isn't a voice assistant where you belt out a command once in a while then move on with your day, and this isn't a low fidelity radio link. These words may be said tens of thousands of times a day. Using three-syllable NATO words is slow and exhausting. It's common for voice programming systems to define their own alphabet based on a truncated NATO alphabet. I went further because I wanted to be able to use vim and play DCSS at very high speeds with my key commands instead of binding a bunch of app specific custom commands.

[1] https://twitter.com/lunixbochs/status/1378159234861264896


As someone who has voice coded full-time for a long period of time, that would get quite annoying.

Notice that some users even start using Talon's noise support. I even changed "scroll up" to "suh" because I said it 300 times per day.


Siri doesn't seem to have a problem just calling letters by their name. It would be nice if there was a setting to choose between the Talon alphabet, NATO phonetic alphabet, and regional standard letter names.


In my experience, the vast majority of people haven't even heard of the NATO alphabet, unless they've had exposure to certain industries which use it heavily, like military/aerospace.


But before using this, everyone is equally unfamiliar with this proprietary one. If you have to learn something either way..?


Looks great! I've been dreaming about something that lets me use a headband and reads brainwaves to interact with a computer for a long time. Every 4-5 years I look into what's been happening on that front, but so far it seems like we're not even close.


I am hoping for an eye-tracking mouse solution in the meantime: look at a portion of the screen and press a key on the keyboard to click, rather than moving a hand away from the keyboard to use a mouse or waiting for an analog-stick mouse to reach the right point on the screen. It seems to be getting more plausible, but will likely stay fairly expensive for a few years yet.


You've unlocked a memory. Probably 10-15 years ago, I tried a bunch of eye tracking apps and found one that somehow worked really well with my cheap USB webcam. I'm sure it needed to be calibrated first. It was classic developer software, snappy with bare-bones UI. Freeware, ran on Windows 2000, not sure if open source.

I wonder if I have it backed up or could track down what it was again. I was not much of a developer then, but these days it would be simple to wire up simulated mouse events.


I've built something similar along these lines. It's an app that uses your camera to track hand movement. As you move your hand through the air your cursor moves as well, and to click you show a simple hand gesture.

My experience with eye trackers was similar. The clicking thing is partially solvable with an eye tracker + foot pedal combination, but the biggest turn-off with eye trackers was poor support for wider monitors or multiple-monitor setups. Eye trackers also require you to be within a certain distance of them, which was affecting my posture.

These were some of the reasons why I built Cursorly https://cursorly.app/ It's still in the early stage but I'd love to hear some feedback. There is a free trial for a few days, but let me know if you want to extend it, I'd be happy to do so.


Talon has an eye tracking mouse, the necessary hardware runs around $250.


I would say that an eye-tracking mouse is what OptiKey has, and for Talon it's more of an eye tracking+head pointing+voice clicking mouse. If you have tmj/vocal cords/neck problems you won't be able to use an eye tracker as a mouse replacement.


> for Talon it's more of an eye tracking+head pointing+voice clicking mouse

> If you have tmj/vocal cords/neck problems you won't be able to use an eye tracker as a mouse replacement.

I disagree with these assertions. There are several options in Talon for eye tracking. Optikey is a nice suite of functionality, but it's Windows-only, while Talon is cross platform. I also have a few users who use the Optikey UI but prefer to use Talon's eye tracking to control it. I've certainly recommended Optikey to users.

The updated direct control mouse mode in Talon 0.3 can be used without any head tracking at all as long as your targets aren't too small.

The zoom mode explicitly uses no head tracking as well.

To click or trigger the zoom mouse, you can make a pop noise, use voice commands, or a physical mouse button, or keyboard, or any keyboard emulating input device (such as a foot pedal), or use one of the dwell options.


Exactly. I hate focus-follows-mouse, but would love focus-follows-gaze.


There's an open issue for me to implement that: https://github.com/talonvoice/talon/issues/487


Very cool. Love to see how the support for different groups of people is considered at the language level!

It would also be interesting to test brain interfaces on this language some day.

I have been thinking about alternative input methods for my project Glicol (https://glicol.org/) for quite some time. This work is definitely inspiring. I am also thinking about how we can better support the blind with some feedback. If you have suggestions, please let me know here:

https://github.com/chaosprint/glicol


Combine this with something like Copilot and you get the 10x programmer :)


I may not ever use it, but I love that such innovative interfaces exist. Huge congratulations to the team


This is impressive. A bit like vi, but voiced.


How many wpm?



