LLM coding agents can't learn from experience on our codebase, but we can learn from using them on it, in the context of our team and processes. I've started building harnesses to get more of what we want from these tools and less of what we'd otherwise have to rework by hand: e.g., specialized agents that refactor and test code after it's been generated, bring it in line with our standards, remove bogus tests, and so on. The learning is embedded in the prompts for these agents.
I think this approach can already get us pretty far. One thing I'm still missing is tooling that makes it easier to build automation on top of, e.g., Claude Code, but I'm sure it's coming (and I'm tempted to try vibe coding it, if only I had the time).
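To make the idea concrete, here's a minimal sketch of what one of these harnesses could look like: a thin wrapper that runs Claude Code headlessly with a cleanup prompt encoding team standards. It assumes the `claude` CLI is installed and uses its non-interactive `-p` (print) mode; the prompt text and the `run_cleanup_agent` helper are hypothetical, not anything from an existing tool.

```python
import subprocess

# Team standards captured as prompt text; this is where the
# "learning from experience" lives and accumulates over time.
CLEANUP_PROMPT = """\
Review the uncommitted changes in this repository and:
- refactor generated code to match our style guide,
- delete tests that merely restate the implementation or assert nothing,
- run the test suite and report any failures.
"""

def run_cleanup_agent(repo_dir: str) -> str:
    """Invoke Claude Code non-interactively with our cleanup prompt.

    Assumes the `claude` CLI is on PATH; `-p` runs a single prompt
    headlessly and prints the result to stdout.
    """
    result = subprocess.run(
        ["claude", "-p", CLEANUP_PROMPT],
        cwd=repo_dir,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # Run the cleanup pass on the current repository.
    print(run_cleanup_agent("."))
```

Even something this small could slot into a post-generation step in CI or a git hook, which is the kind of automation I mean above.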