If you're interested, I'd encourage you to implement an MCP integration and see if you change your mind.
For instance, I have a little 'software team in a box' tool. v1 integrated github and three different llms manually (react + python backend). This is fine. You can call github commands via CLI on the backend, and add functionality somewhat easily, depending on the LLM's knowledge.
Pain points -- if you want the workflow to depend on multiple outputs from these pieces (e.g. see that there's a pull request and assess it, or see that a pull request is signed off on / merged and update something) -- you have to code most of those workflows manually.
v2, I wiped that out and have a simple git, github and architect MCP server written up. Now I can have Claude as a sort of mastermind and just tell it "here are all the things you can do, please XXX". It wipes out most of the custom workflow coding and lets me just tell Claude what I'd like to do -- on the backend, my non-LLM MCP server can deal with the things it's good at: API calls, security checks, etc.
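To give a feel for the shape of it, here's a minimal sketch of what one of those tool servers can look like, using the Python SDK's FastMCP helper. The tool names and the gh CLI calls are illustrative, not my actual setup:

    # Hypothetical MCP server exposing a couple of github-ish tools.
    # Tool names and gh CLI usage are illustrative placeholders.
    import subprocess
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("github-tools")

    @mcp.tool()
    def list_open_prs(repo: str) -> str:
        """List open pull requests for a repo via the gh CLI."""
        result = subprocess.run(
            ["gh", "pr", "list", "--repo", repo, "--state", "open"],
            capture_output=True, text=True, check=False,
        )
        return result.stdout or result.stderr

    @mcp.tool()
    def pr_status(repo: str, number: int) -> str:
        """Return review/merge status for a single pull request."""
        result = subprocess.run(
            ["gh", "pr", "view", str(number), "--repo", repo,
             "--json", "state,reviewDecision,mergedAt"],
            capture_output=True, text=True, check=False,
        )
        return result.stdout or result.stderr

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default

The LLM never sees the CLI or the API keys; it just sees a list of tools and decides when to call them.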
So is the MCP server acting like a middleman between the llm and the application you want to control?
Like, could I give the MCP server the ability to, say, exec Unix code on my machine, and then tell the LLM "here's the MCP server; this function can execute Unix code and get back the response"?
Then I could tell the LLM, "create an application using the MCP server that will listen for a GitHub webhook and git pull when the webhook hits, and keep it running", and the LLM would generate the commands necessary to do that and run them through the MCP server, which just executes the Unix code. And voilà?
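Something like this is what I'm imagining -- a purely hypothetical "run anything" tool (and obviously a gaping security hole without sandboxing or an allowlist):

    # Hypothetical "run anything" MCP tool: the server just shells out and
    # returns whatever the command printed, so the LLM can iterate on results.
    # Unsafe as written -- no sandboxing or command allowlisting.
    import subprocess
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("shell")

    @mcp.tool()
    def run_command(command: str) -> str:
        """Execute a Unix shell command and return its combined output."""
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=120,
        )
        return result.stdout + result.stderr

    if __name__ == "__main__":
        mcp.run()

So the LLM never touches the shell directly; it just asks the server to call run_command and reads the output back.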
I've gotten an llm to create files and run system commands for me.
“Implementation detail” is doing a lot of work in that second sentence, though. There are whole startups like LangChain that were trying to build out a reasonable agent framework, integrated in such a way that the LLM can drive it. MCP makes that really easy: the LLM training just has to happen once, against the MCP spec, and I get client and LLM support for an iterative tool-use scenario right in the LLM.
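Concretely, the client side of that loop is tiny. Roughly this, assuming the official Python SDK; the server path and tool name are placeholders:

    # Hypothetical client-side loop: list the server's tools, hand them to
    # the LLM, and relay whatever tool call it asks for back to the server.
    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main():
        server = StdioServerParameters(command="python", args=["my_tools_server.py"])
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()   # advertise these to the LLM
                print([t.name for t in tools.tools])
                # ...the LLM decides it wants a tool; the client just relays it:
                result = await session.call_tool("some_tool", {"arg": "value"})
                print(result.content)

    asyncio.run(main())

The point is that none of this is per-integration glue anymore; the same loop works against any MCP server.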