Why must LLMs or “AI” beat or match the smartest and most capable humans to count as solving a real problem? Plenty of technology has been invented, and is in widespread use, that solves real problems without having human-like intelligence.
I have a bash script which is very similar to this, except instead of dumping it all into one file, it opens all the matched files as tabs in Zed. Since Zed's AI features let you dump all, or a subset, of open tabs into context, this works great. It gives me a chance to curate the context a little more. And what I'm working on is probably already in an open tab anyway.
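A minimal sketch of that kind of script, assuming the `zed` CLI is on your PATH and using `grep -rl` as a stand-in for whatever matcher the original uses (the pattern and directory arguments are illustrative):

```shell
#!/usr/bin/env bash
# Open every file matching a pattern as tabs in Zed, so they can
# then be pulled into the AI context from inside the editor.
pattern="$1"
dir="${2:-.}"

# -r: recurse, -l: print only the names of files with a match.
grep -rl "$pattern" "$dir" | xargs -r zed
```

`xargs -r` skips launching Zed entirely when nothing matches.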
I’m not a fan per se of TE, but I did get the OP-Z as a continuation of playing with the POs.
I still like it, but I’m already trying to find something to eventually replace it with. Is there really something out there with similar size, features, and price to the OP-Z, though? I would like to find something.
i'd definitely prefer to buy an mc-101 over an OP-Z personally if you want something in the same sort of price range. if you can spend a bit more, i'd look into the dirtywave m8 if the workflow appeals to you. i have one and it's my favourite piece of audio hardware that i own.
of course virtually any computer with a DAW is the real best answer in terms of features and price, but i understand the urge to want to be away from a computer while creating.
I've been maintaining my own script that does something like this. A few things I've found to be useful: it can also pull in context from git history (prioritizing files that have been worked on most recently, and including recent git commit messages, which help the LLM know more about what's going on), and optionally it can run multiple stages: for long files, first summarizing them, then including the summary in the final prompt.
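The git-history stage above can be sketched roughly like this; the commit and file counts are arbitrary, and this is just one way to rank files by recency:

```shell
#!/usr/bin/env bash
# Rank repo files by how recently they were touched, newest first.
# git log lists commits newest-first; --name-only emits the files
# each commit touched, so deduplicating keeps the first (most
# recent) occurrence of each path.
git log --name-only --pretty=format: -n 50 \
  | awk 'NF' | awk '!seen[$0]++' | head -20

# Recent commit messages, to tell the model what's in flight.
git log --oneline -n 10
```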
Interesting! Did you publish it?
It’s a great idea to prioritize based on git history.
As for multiple stages, that means the tool itself is making a few calls to the model. What do the code summaries look like? Just function and class docstrings? Getting the model to write summaries that are comprehensive enough to guide development but still more compact than the code itself seems like it may not be a trivial problem.
Comments are most helpful when they explain something that is not obvious from just looking at the code. Comments that merely restate what the code plainly does don't add much value. This kind of technology can still be useful in other forms, as other comments here note, but I really don't see this kind of auto-commenting catching on.