I like to scan for new productivity apps while sitting in my local university's library. It's crazy, but over the course of about an hour, roughly 95% of the students there open up ChatGPT (or something similar) at least once.
Earlier today I asked Claude "What does (abbreviation) mean in (context)?" Ditto yesterday: different abbreviation, different context, but the AI did the same chore for me. Not crazy.
Claude's become quite good at delivering references for this kind of question, too.
shameless plug: I also got fed up with todo apps (and note-taking apps in general), so I built "Zettel"[1]. It's a simple piece of paper, but on your phone. It's amazing what you can get done with such a simple tool.
An unfortunate clash. I can say from experience that the sst version has a lot of issues that would benefit from more manpower, even though they are working hard. If only they could resolve their differences.
I’m definitely interested as well. This is the other side of the sst/charm ‘opencode-ai’ fork we’ve been expecting, and I can’t wait to see how they are differentiating. Talented teams on all sides; glad to see indie dev shops getting involved (I guess you could include Warp or Sourcegraph here as well, though their funding models are quite different).
Crush (compared to OpenCode):
- Pro: Sexy UI with a separate diff window and good contextual information
- Con: No SSO with Anthropic; you need to generate an API key
- Con: No login with GitHub Copilot
- Con: Really bad planning capabilities as an agent. It acts awkwardly, executing single commands instead of batching them.
- Con: As a result, it is really slow.
- Con: Uses many more tokens per operation than OpenCode
Currently I would definitely go with sst/opencode. Crush seems much more like a beta.
One big benefit of opencode is that it lets you authenticate to GitHub Copilot. This lets you switch between all the various models Copilot supports, which is really nice.
Context engineering will be just another fad, like prompt engineering was. Once the context window problem is solved, nobody will be talking about it anymore.
Also, for anyone working with LLMs right now, this is a pretty obvious concept, and I'm surprised it's at the top of HN.
The original GPT-3 was trained very differently from modern models like GPT-4. For example, the conversational assistant/user structure is now built into the models, whereas earlier versions were simply text completion models.
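Concretely, here's a rough sketch of the difference using the OpenAI Python SDK (the model names are just placeholders, not a recommendation): the old interface continues a flat prompt string, while the chat interface takes role-tagged messages, because that conversational structure was baked in during training.

```python
# Rough illustration only; assumes `pip install openai` and an API key in
# OPENAI_API_KEY. Model names are placeholders.
from openai import OpenAI

client = OpenAI()

# Old-style text completion: the model just continues a flat string.
completion = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # a completion-style model
    prompt="Q: What does TLS stand for in networking?\nA:",
    max_tokens=50,
)
print(completion.choices[0].text)

# Chat-style call: the assistant/user structure is part of the API because
# the model was trained on that conversational format.
chat = client.chat.completions.create(
    model="gpt-4o",  # a chat-trained model
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "What does TLS stand for in networking?"},
    ],
)
print(chat.choices[0].message.content)
```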
It's surprising that many people view the current AI and large language model advancements as a significant boost in raw intelligence. Instead, the progress appears to be driven by clever techniques (such as "thinking") and by agents built on top of a foundation of simple text completion. Notably, the core text completion component itself hasn’t seen meaningful gains in efficiency or raw intelligence recently...
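To make that concrete, here's a hedged sketch of what "agents built on top of text completion" means in practice: a loop around a chat endpoint plus a couple of hand-written tools. The tool names, prompt format, and model are made up for illustration and aren't any particular product's design.

```python
import json
import os

from openai import OpenAI

client = OpenAI()

# Hypothetical tools; real agents have richer ones (edit files, run tests, ...).
TOOLS = {
    "read_file": lambda path: open(path).read(),
    "list_dir": lambda path: "\n".join(sorted(os.listdir(path))),
}

def run_agent(task: str, max_steps: int = 10) -> str:
    """Loop: ask the model, run any tool it requests, feed the result back."""
    messages = [
        {"role": "system", "content":
            'You may call a tool by replying with JSON such as '
            '{"tool": "read_file", "arg": "path"}; otherwise answer directly.'},
        {"role": "user", "content": task},
    ]
    for _ in range(max_steps):
        reply = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=messages,
        ).choices[0].message.content or ""
        messages.append({"role": "assistant", "content": reply})
        try:
            call = json.loads(reply)           # model requested a tool
        except json.JSONDecodeError:
            return reply                       # plain text: treat as the answer
        try:
            result = TOOLS[call["tool"]](call["arg"])
        except Exception as exc:               # let the model see tool failures
            result = f"error: {exc}"
        messages.append({"role": "user", "content": f"Tool result:\n{result}"})
    return "Stopped after max_steps without a final answer."

# e.g. print(run_agent("What files are in the current directory?"))
```

The point of the sketch is that nothing in the loop is "intelligent" on its own; all of the apparent agency comes from repeatedly calling the same completion primitive and feeding results back in.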
Hey, I’m part of the Kodus team. We built an open-source code review agent, and it’d be awesome if you gave it a try. Here’s the repo (https://github.com/kodustech/kodus-ai).