Skills are cool, but to me they're more of a design pattern / prompt-engineering trick than something in need of a hard spec. You can even implement the pattern over MCP - I've been doing it for a while: "Before doing anything, search the skills MCP and read any relevant guides."
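For anyone curious, here's a minimal sketch of such a skills server, assuming the official Python MCP SDK's `FastMCP` interface; the `skills/` directory layout and the tool names are my own invention:

```python
# Minimal "skills" MCP server sketch. Assumes the official Python MCP SDK
# (FastMCP); the flat skills/<name>.md layout is illustrative only.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("skills")
SKILLS_DIR = Path("skills")  # hypothetical: one markdown guide per skill

@mcp.tool()
def search_skills(query: str) -> list[str]:
    """Return names of skill guides whose text mentions the query."""
    return [
        p.stem
        for p in SKILLS_DIR.glob("*.md")
        if query.lower() in p.read_text().lower()
    ]

@mcp.tool()
def read_skill(name: str) -> str:
    """Return the full text of one skill guide."""
    return (SKILLS_DIR / f"{name}.md").read_text()

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```

The agent then only needs the standing instruction quoted above to discover and pull in guides on demand.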
I get this sentiment, but I think that's exactly why it's so powerful. It would be like dismissing Docker/containers as just some shell scripts around a kernel feature. Something can be conceptually simple and still be novel and transformative.
I highly doubt we'll be talking about MCP next year. It is a pretty bad spec but we had to start somewhere.
I agree with you, but I also want to check that I understand this correctly: there was a paradigm in which we aimed for Small Language Models to perform specific types of tasks, orchestrated by a larger LLM. That is what I perceived the MCP architecture was meant to standardize.
But here it seems more like a diamond-shaped information flow: the LLM processes the big task, then the prompt is customized (not by the LLM itself) with reference to the Skills, and the customized prompt is fed back into the LLM.
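As a toy illustration of that diamond (everything here is hypothetical, and `call_llm` is a stub standing in for a real model call):

```python
# Toy illustration of the diamond-shaped flow: two passes through the same
# model, with deterministic (non-LLM) prompt assembly in the middle.
SKILLS = {  # hypothetical skill library
    "pdf": "For PDFs, use pypdf and never rasterize pages.",
    "csv": "For CSVs, stream rows instead of loading the whole file.",
}

def call_llm(prompt: str) -> str:
    """Stub standing in for a real model call."""
    return f"<model output for: {prompt!r}>"

def run(task: str) -> str:
    plan = call_llm(f"Break this task into steps: {task}")  # top: big task
    # middle: plain string matching picks skills; no LLM involved here
    chosen = [k for k in SKILLS if k in (task + plan).lower()]
    prompt = "\n\n".join([*(SKILLS[k] for k in chosen), f"Task: {task}", plan])
    return call_llm(prompt)  # bottom: customized prompt goes back to the LLM

print(run("extract the tables from this pdf"))
```

Same model at both ends; the middle step is just string assembly, which is the part Skills standardizes.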
I disagree. Wrap this up in a container/runtime spec plus a package index and suddenly you've got an agent that can dynamically extend its capabilities with any skill anybody has shared. Instead of `uv add foo` for Python packages you've got `skill add foo` for agent skills that the agent can invoke whenever it has a matching need.
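A hypothetical sketch of what a `skill add` command could do; the index URL, the zip bundle format, and the `~/.claude/skills` destination are all assumptions, not a real tool:

```python
# Hypothetical `skill add <name>`: fetch a shared skill bundle from an index
# and unpack it where the agent discovers skills. Nothing here is real.
import sys
import zipfile
import urllib.request
from io import BytesIO
from pathlib import Path

INDEX = "https://skills.example.com"       # hypothetical package index
DEST = Path.home() / ".claude" / "skills"  # assumed skills directory

def skill_add(name: str) -> None:
    """Download a skill bundle and unpack it as DEST/<name>/SKILL.md etc."""
    with urllib.request.urlopen(f"{INDEX}/{name}.zip") as resp:
        bundle = zipfile.ZipFile(BytesIO(resp.read()))
    target = DEST / name
    target.mkdir(parents=True, exist_ok=True)
    bundle.extractall(target)
    print(f"installed skill '{name}' -> {target}")

if __name__ == "__main__":
    skill_add(sys.argv[1])
```

The point is that distribution becomes a packaging problem, which we already know how to solve.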
Exactly! I don't think Skills is a new algorithm, but it's definitely a new paradigm for organizing your prompt: essentially dynamic context assembly, with content crossing user boundaries. They even mention that they're working on skill sharing across teams in an organization. You can expand that to a global user base sharing skills with each other through an agent.
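A sketch of that assembly step, assuming the published SKILL.md convention (a directory per skill, with `name`/`description` YAML frontmatter); the keyword match at the end is a crude stand-in for the model choosing skills by their descriptions:

```python
# Dynamic context assembly with progressive disclosure: only one-line skill
# metadata is known up front; a skill's full body is read in on a match.
from pathlib import Path

SKILLS_DIR = Path("skills")  # e.g. skills/pdf-forms/SKILL.md

def skill_metadata() -> list[tuple[str, str, Path]]:
    """Parse (name, description, path) from each SKILL.md's frontmatter."""
    out = []
    for path in SKILLS_DIR.glob("*/SKILL.md"):
        fields = {}
        for line in path.read_text().splitlines()[1:]:  # skip opening ---
            if line.strip() == "---":
                break
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
        out.append((fields.get("name", path.parent.name),
                    fields.get("description", ""), path))
    return out

def assemble_prompt(task: str) -> str:
    """Crude keyword overlap stands in for the model's relevance judgment."""
    parts = [f"Task: {task}"]
    for name, desc, path in skill_metadata():
        if any(word in desc.lower() for word in task.lower().split()):
            parts.append(f"--- skill: {name} ---\n{path.read_text()}")
    return "\n\n".join(parts)

print(assemble_prompt("fill out this pdf form"))
```

Because only the metadata lives in the prompt by default, a shared library can grow arbitrarily large without bloating every context window.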