I wonder what value the GPL even has in a world where I can trivially reimplement whatever a company builds on a permissive license and doesn't share. I still see a place for it in things that are low level, algorithm heavy, real-world-test heavy and critical, e.g. kernels, cryptography, storage engines, filesystems. All the rest of userland and the web, not so much.
The claw cesspool boldly thinks they are smart for just building all these things, and probably thinks they came up with novel ideas, when everyone who has the slightest clue what is going on is petrified. It's clear these concepts are going to happen at some point, but we don't even have an answer for how to do half of this safely. The worst part is that clawcels will use these for “outreach” and “content”.
Non-permissive licenses, open core and proprietary software will just not survive. There is no reality in which I or anyone in my community would use something like, e.g., Raycast or the SaaS email clients that someone locks down and does rent extraction and top-down decisions on. Once you've experienced being able to change anything about the software you use with a prompt, while using it, there is no going back to the glitches, limitations and stupidities. We have to come to terms with infinite software.
Just what absolutely no one needed: another locked-down, non-web platform with horrific security that tries to digitally enslave people just the tiniest level above what they can accept now. I don't see any future where Raycast can survive, and I would say that's a good thing.
I understand some of the skepticism towards this product, but are you saying this will somehow negatively impact Raycast (the company)? Raycast the tool is incredibly useful, so I'm surprised to see this sentiment.
I am saying it's as toxic as the main product of Raycast, and they got away with it in a world where people could not replicate the apps and the 100 plugins they use in days. There is zero possibility anyone I know will tolerate a locked ecosystem like this any longer than absolutely needed.
There is the same divide starting to form that NFTs had back in the day. Tech bros instantly like anything with claw in the name; the rest of us will dismiss anything with that naming and philosophy as toxic slop culture. It will be interesting to see how far this one goes.
It's just another example, and just a detail in the broader story: we cannot trust any model provider with any tooling or other non-model layer on our machines or our servers. No browsers, no CLIs, no apps, no whatever. There may not be alternatives to frontier models yet, but everything else we need to own as a true open source, trustable layer that works in our interest. This is the battle we can win.
Why don't people form cooperatives, contribute to buy serious hardware, colocate it in local data centers, and run good local models like GLM on it to share?
We are starting to! TBH it will take some time until this is feasible at a larger scale, but we are running a test for this model in one of my community groups.
This take is so incredibly short-sighted. Sure, MCP is not perfect and needs better tooling and slightly updated standards, but CLIs are *maybe* just the future for agents that are themselves CLIs. I would argue those agents will not be the mainstream future but a niche I call "low-level system agents", or things for coding bros. An agent of the future needs to be way more secure, auditable, reasonable and controllable, none of which is possible by slapping a CLI with execution rights into a container, even with a bubblewrap profile.

An agent of the future will run in a sandbox similar to a Cloudflare Workers/workerd isolate, with capabilities. The default will be connecting one central MCP endpoint to an agent that runs in its own sandbox without direct access to the systems it works on. The MCP gateway handles all the things that matter: connecting LLM providers, tokens for APIs, enforcing policies, permission requests, logging, auditing, threat detection, and also tools. Tools execute at the container level, so there is no need to change anything about existing containerised workloads; it all happens transparently in the container realm. I am not saying system-level agents have no use, but any company running something like Kubernetes or Docker Compose will have zero need or tolerance for an agent like that.
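A minimal sketch of that gateway idea, in TypeScript. Every name here (`McpGateway`, `Policy`, `register`, `call`) is hypothetical and not a real MCP API; the point is only that every tool call from the sandboxed agent crosses one policy-and-audit choke point instead of the agent having shell access:

```typescript
// Hypothetical sketch: a central gateway that a sandboxed agent talks to.
// Names are illustrative, not any real MCP SDK.

type ToolFn = (args: Record<string, unknown>) => unknown;
type Policy = (tool: string, args: Record<string, unknown>) => boolean;

interface AuditEntry { tool: string; allowed: boolean; at: number }

class McpGateway {
  private tools = new Map<string, ToolFn>();
  private audit: AuditEntry[] = [];

  constructor(private policy: Policy) {}

  register(name: string, fn: ToolFn): void {
    this.tools.set(name, fn);
  }

  // The sandboxed agent only ever calls this. The gateway decides,
  // records the decision, and runs the tool on the container side.
  call(tool: string, args: Record<string, unknown>): unknown {
    const allowed = this.tools.has(tool) && this.policy(tool, args);
    this.audit.push({ tool, allowed, at: Date.now() });
    if (!allowed) throw new Error(`denied: ${tool}`);
    return this.tools.get(tool)!(args);
  }

  auditLog(): readonly AuditEntry[] {
    return this.audit;
  }
}

// Usage: this agent's policy whitelists a single read-only tool,
// so a shell-execution tool is refused and the refusal is audited.
const gateway = new McpGateway((tool) => tool === "read_file");
gateway.register("read_file", ({ path }) => `contents of ${String(path)}`);
gateway.register("exec_shell", () => "never reached");

const ok = gateway.call("read_file", { path: "/etc/motd" });
let denied = "";
try {
  gateway.call("exec_shell", {});
} catch (e) {
  denied = (e as Error).message;
}
```

In a real deployment the policy, token handling and threat detection would live in the gateway process, while the agent itself runs in an isolate with no filesystem or network capability beyond this one endpoint.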
Can we please not change the meaning of chat to mean agent interface? It was painful to see crypto suddenly meaning token instead of cryptography. Plus I really don't want to “chat” with AI. It's a textual interface.
Fair point, although I think we have OpenAI to blame for that - for buying chat.com and pointing it to the most popular textual AI interface of them all :)
It's an interesting direction if you see it under the umbrella of diminishing costs: you build a product once with vibe coding and a design/product hat on. Once you know what works, you rebuild it 100% in a framework like this. You do this from scratch every time the tech debt or the mismatch between architecture and needs gets too big.
You could also just always use the same framework, which is what I'm doing anyway. But you have to remember that no matter how well you spec it, the first iteration of the specs is going to suck anyway.
But you vibe-code it anyway and see what happens. You'll start noticing obvious issues that you can trace back to something in the spec.
Then you throw away the entire thing (the entire project!) and start from scratch. Repeat until you have something you like.
Incremental speccing doesn't work, though. You need a clean-room approach carrying over only the important learnings from previous iterations. Otherwise the agent will never pick a hard but correct path.