Oscar, an open-source contributor agent architecture (googlesource.com)
175 points by theptip 49 days ago | 23 comments



I feel this pain as a solo maintainer of a somewhat popular open source project. More users bring more questions in GitHub issues and Discord.

That’s one reason I added interactive help [0] to aider. Users can type `/help <question>` in the app to get AI help, based on all of aider’s docs. I’ve been considering turning it into a bot for GitHub issues and Discord.

Dosu [1] is another app in this “triage issues” space that looks really good when I encounter it in the wild.

[0] https://aider.chat/docs/troubleshooting/support.html

[1] https://dosu.dev/
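Under the hood, this kind of docs-grounded help is just retrieval plus an LLM. A minimal stdlib-only sketch of the retrieval half (the section texts and the install.html URL are made up for illustration; only the support page URL is real, and aider's actual implementation differs):

```python
# Toy docs retrieval for a /help-style bot: score doc sections by token
# overlap with the question, return the best matches. A real bot would
# pass the retrieved sections plus the question to an LLM.
import re
from collections import Counter

# Hypothetical doc sections (url, body); install.html is made up.
DOC_SECTIONS = [
    ("https://aider.chat/docs/install.html",
     "Install aider with pip. Requires Python 3.9 or newer."),
    ("https://aider.chat/docs/troubleshooting/support.html",
     "Type /help <question> in the chat to get help based on the docs."),
]

def tokens(text):
    """Lowercased bag-of-words for crude overlap scoring."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, sections, k=1):
    """Return the k doc sections with the highest token overlap."""
    q = tokens(question)
    scored = []
    for url, body in sections:
        overlap = sum((q & tokens(body)).values())
        scored.append((overlap, url, body))
    scored.sort(reverse=True)
    return [(url, body) for _, url, body in scored[:k]]

if __name__ == "__main__":
    print(retrieve("how do I install aider with pip?", DOC_SECTIONS)[0][0])
```

The LLM step then mostly just paraphrases the retrieved sections and links them, which is why the answers come with doc links "as a bonus".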


I handle this by redirecting all questions (anything that isn't a feature request or a bug report) to the project's community forum (hosted on discourse). Then I wrote and deployed https://github.com/pierotofy/issuewhiz to automatically catch and close issues that are identified as questions. GitHub issues is not the place to ask questions, in my opinion.

Here's an example of it in action: https://github.com/LibreTranslate/LibreTranslate/issues/632

It's been working pretty well.
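The triage flow is roughly the following sketch. This is a stand-in, not issuewhiz's actual logic: the real project asks an LLM to classify the issue, whereas the keyword heuristic and forum URL here are placeholders.

```python
# Toy issue triage: classify new issues as questions and redirect them
# to the forum. looks_like_question() is a placeholder for the LLM
# classifier issuewhiz actually uses; FORUM_URL is hypothetical.
FORUM_URL = "https://example-forum.invalid/"  # placeholder forum URL

def looks_like_question(title, body):
    """Crude stand-in classifier: issuewhiz asks an LLM instead."""
    text = (title + " " + body).lower().rstrip()
    return text.endswith("?") or text.startswith(("how ", "what ", "why ", "can "))

def triage(issue):
    """Return (action, comment) for a new issue dict with title/body."""
    if looks_like_question(issue["title"], issue["body"]):
        return ("close", "This looks like a question; please ask on " + FORUM_URL)
    return ("keep", None)
```

A GitHub Actions workflow would call `triage()` on the `issues: opened` event, post the comment, and close the issue when the action is `"close"`.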


Thank you for using discourse! It is sooooo much better than discord. I really, really can't stand discord. Searching for an issue or topic is like wading into the ocean looking for a bluegill.


Have you tried applying for GitHub Sponsors? Kinda surprised to see aider.chat is solo maintained. I keep seeing it as one of the best at what it does, though I haven't tried it out much. I'm waiting for NixOS packaging of it [0]. Note that someone wants aider badly enough that they put a bounty on getting it packaged in nixpkgs.

- [0] https://github.com/NixOS/nixpkgs/issues/330726


I think it would be much more effective if "/help <question>" just redirected users to a search engine. That won't find stuff on Discord of course, which is a good reason to not use Discord at all.

No need to use some elaborate AI solution for such a simple (and long-time solved) problem.


The interactive help has the full context of the user's current aider session: what they're working on in the coding chat, active config settings, their OS & shell, etc.

Aider can tailor the help response to exactly their question and situation. It also always links to relevant docs in these help responses.

I've pasted doc links in reply to user questions in Discord, and I've pasted /help output. The latter is far more helpful, and includes the links as a bonus.


Have you actually used a frontier LLM for this use case? It’s quite different from a search engine.

In the case where someone has asked your exact question, including follow-ups, you get the same (or better) results from search.

In most cases for a small project, no one has answered your precise question, so you might need to read docs and figure it out. It’s often much quicker to use an LLM here (though sometimes less accurate than the hypothetical matching search result).

As a benchmark, try using an LLM vs. search for answering questions about Nix. I find it to be 10-100x more efficient than searching and reading through the docs (including hallucinations and time spent corroborating, since usually I can validate the answer by running some command). This is perhaps a cherry-pick for LLMs, but it should illustrate clearly why “just use search” is missing the value prop completely.


Search engines are not as good, and in my experience search engines don't help _at all_ with questions, unless the question already exists online (and has been answered).

The moment your question is more than five words long, you can forget about it.


I think they should expand the scope to direct contributions for easy issues (right now Oscar seems mostly for surfacing project information to contributors). I've had a lot of luck using aider + sonnet for direct contributions, and I'm pretty sure you could do it at scale for "getting-started" issues.

[1] GitHub bot that does direct contributions: https://github.com/tscircuit/bunaider

[2] Example contribution: https://github.com/tscircuit/checks/pull/8


Dear god I want off Mr Bone's Wild Ride. I can't wait for even more pull request spam, now fully automated.


The ride never ends.


> Oscar differs from many development-focused uses of LLMs by not trying to augment or displace the code writing process at all. After all, writing code is the fun part of writing software.


I think it's good that they're focusing on all the not-code parts of contributing, as you say there are other projects for code contributions.


We also tried to implement a GH Issue to PR workflow in patchwork - https://github.com/patched-codes/patchwork

It is a bit hard to get it to work reliably except for small changes.


It sounds like the indexer is Go-specific, while aigen and others are going multilingual via treesitter (I believe).

I'm curious if there are any projects folks like for the indexing/embedding step, like Git repo -> vector index / graph index of code + comments + docs?

(I am not as interested in the RAG, LLM, UX, etc. that come after.)
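For anyone wondering what the repo -> vector index step looks like in the simplest case, here is a stdlib-only sketch. The hashing bag-of-words embedding and fixed-size line chunking are deliberate simplifications: real pipelines use treesitter-aware chunking and learned embeddings, and none of the names here come from any particular project.

```python
# Toy repo indexer: chunk source files by line count and embed each
# chunk with a hashing bag-of-words vector (a stand-in for a learned
# embedding model). Search ranks chunks by dot product with the query.
import hashlib
import math
import pathlib
import re

DIM = 256  # hashing-trick vector size

def embed(text):
    """L2-normalized hashed bag-of-words vector for a text chunk."""
    vec = [0.0] * DIM
    for tok in re.findall(r"\w+", text.lower()):
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def index_repo(root, exts=(".py", ".md"), chunk_lines=40):
    """Walk a checkout and return a list of (path, start_line, vector)."""
    index = []
    for path in pathlib.Path(root).rglob("*"):
        if path.suffix not in exts:
            continue
        lines = path.read_text(errors="ignore").splitlines()
        for i in range(0, len(lines), chunk_lines):
            chunk = "\n".join(lines[i:i + chunk_lines])
            index.append((str(path), i + 1, embed(chunk)))
    return index

def search(index, query, k=3):
    """Return the k chunk locations most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda e: -sum(a * b for a, b in zip(e[2], q)))
    return [(path, line) for path, line, _ in ranked[:k]]
```

A treesitter-based indexer would replace the line chunking with function/class boundaries, which is what makes the approach language-aware.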


We have a patchflow in patchwork that does that - https://github.com/patched-codes/patchwork. It is called ResolveIssue - https://github.com/patched-codes/patchwork/tree/main/patchwo...


AFAICT patchwork also sits on top of treesitter for indexing? https://github.com/patched-codes/patchwork/blob/b24a3ee07040...


Yes, we also use treesitter.


It will be very interesting to see whether they are successful, and whether this kind of maintainer-load reduction could be adapted to something like Ubuntu/Debian, where maintaining packages is quite time-consuming.


The value of Debian/Ubuntu is that there are people vetting and packaging and testing software. If you remove that and have AI do it, there isn't much left.


This agent isn't claiming to displace that work; it's more like acting as the contribution equivalent of Tier 1 Support in a customer-service setting (where the actual project maintainers would then be Tier 2 Support).

Such agents would mostly provide guidance to jumping through the hoops necessary to get your PR in a state where it's mergeable according to the project's contribution guidelines — removing the need for humans to be that guide. (In other words, they'd act as a "compiler for your PR" that you could iterate on, with good English-language error messages. IMHO something that should already exist locally — but git has no local reified concept of PRs, so this is hard.)

But I would expect that in almost every case, the project's human maintainers would still eventually step in to review the PR, after the bot seems happy with it.

---

Mind you, in theory, for a particular set of limited-scope (but frequently occurring) problems, Tier 1 Support agents are usually empowered to solve those problems directly for a customer. And likewise, there could potentially be a set of limited-scope contribution types that a Tier 1 Project Contribution Auditor would be able to directly approve. Things like, say, fixing typos in doc comments (gated by the bot determining that the diff increases semantic validity of the text by some weird LLM "parseability" metric.)


I am not sure what you are talking about, doesn't sound like Debian/Ubuntu package maintenance.


We have a few large open source projects already using patchwork to help manage issues and PRs - https://github.com/patched-codes/patchwork



