> They'll never be fit for purpose. They're a technological dead-end for anything like what people are usually throwing them at, IMO.
This comment is detached from reality. LLMs have been proven effective at creating complete, fully working, fully featured projects from scratch. You need to provide the necessary context and use popular technologies with a large enough corpus for the LLM to know what to do. If a one-shot approach fails, a few iterations are all it takes to bridge the gap. I know that for a fact because I do it on a daily basis.
> Cool. How many "complete, fully working" products have you released?
Fully featured? One, so far.
I've also worked on small backing services, and on a GUI application that visualizes the data provided by one of them.
I've lost count of the number of API testing projects I've vibe-coded. I have a few instruction files that help me vibe-code API test suites from OpenAPI specs. Postman collections work even better.
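To make that workflow concrete, here's a minimal sketch of the kind of scaffold I have the LLM flesh out: walk the spec's `paths`, and turn every documented operation into a smoke-test case. The inline spec and its endpoints below are a made-up toy example, not any real API.

```python
# Toy OpenAPI fragment (hypothetical endpoints, for illustration only).
spec = {
    "paths": {
        "/users": {
            "get": {"responses": {"200": {"description": "List users"}}},
            "post": {"responses": {"201": {"description": "Create user"}}},
        },
        "/users/{id}": {
            "get": {"responses": {"200": {"description": "Fetch one user"}}},
        },
    }
}

def test_cases(spec):
    """Yield (method, path, expected_status) for every documented operation."""
    for path, ops in spec["paths"].items():
        for method, op in ops.items():
            # Use the lowest documented status code as the happy-path expectation.
            status = min(int(code) for code in op["responses"])
            yield method.upper(), path, status

cases = list(test_cases(spec))
# e.g. [('GET', '/users', 200), ('POST', '/users', 201), ('GET', '/users/{id}', 200)]
```

From a case list like this, the instruction files tell the LLM to emit one request-and-assert test per tuple; the prompt does the tedious part, not me.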
> If you are far from an expert in the field maybe you should refrain from commenting so strongly because some people here actually are experts.
Your opinion makes no sense. Your so-called experts claim LLMs can't do vibe-coding well, yet I, a non-expert, am quite able to vibe-code my way to production-ready code. What conclusion are you hoping to draw from that? What do you think your experts' opinion will achieve? Will it suddenly delete the commits from LLMs and all the instruction prompts I put together? What point do you plan to make with your silly appeal to authority?
I repeat: non-experts are proving that what your so-called experts claim doesn't work is in fact possible, practical, and even mundane. What do you plan to draw from that?
Do what I couldn't with these supposedly capable LLMs:
- A Wear OS version of Element X for the Matrix protocol that works like Apple Watch's Walkie-Talkie and Orion: push-to-talk, easy switching between conversations/channels, and sending and playing back voice messages via the existing spec implementation so it works on all clients. Like Orion, users need to be able to replay missed messages. Initiating and declining real-time calls. Bonus points for messaging, reactions, and switching between conversations via a list.
- Dependencies/task relationships in Nextcloud Deck and Nextcloud Tasks, e.g., `blocking`, `blocked by`, and `follows`, with support for more than one of each. A filtered view that shows what's currently actionable and hides what isn't, so people aren't scrolling through enormous lists of tasks.
- A Wear OS version of Nextcloud Tasks/Deck in a single app.
- Nextcloud Notes on Wear OS with feature parity with Google Keep.
- Implement portable identities in the Matrix protocol.
- Implement P2P in the Matrix protocol.
- Implement push-to-talk in Element for the Matrix protocol à la Discord, e.g., hold a key or press a button and start speaking.
- Implement message archiving in Element for the Matrix protocol à la WhatsApp: an archived conversation no longer appears in the user's conversation list and instead sits in an `Archived` area of the UI, but when a new message arrives in it, it comes out of the Archive view. Archive status needs to sync between devices.
Open-source the repo(s), issue pull requests to the main projects, provide the prompts, and do a proper write-up. Pull requests for project additions need to be accepted, and it all needs to respect existing specs. Otherwise, it's just yet more hot air in the comments section. I'm tired of all this empty bragging. It's a LARP and a waste of time.
As far as I'm concerned, it is all slop and not fit for purpose. Unwarranted breathless hype akin to crypto with zero substance and endless gimmicks and kidology to appeal to hacks.
I guarantee you can't meaningfully do any of the above and get it into public builds with an LLM, but I'd love to be proven wrong.
If they were so capable, it would be a revolution in FOSS, and yet everyone who heavily uses them produces a mix of inefficient, insecure, idiotic, bizarre code.