Hacker News

How about Svelte 5 with runes mode?

One of the biggest challenges I've encountered so far while working on my SvelteKit codebase with Svelte 5 is that all frontier models struggle to understand the differences between major versions of languages and frameworks, leading to a lot of annoying hallucinations. They're fantastic at writing code, but the last mile of fixing incorrect APIs or syntax becomes very tedious.




This is an enormous footgun that makes AI absolutely impossible to use in Godot. You get a ton of Godot 3 code output which really doesn’t resemble Godot 4 code at all, despite having the same name.


I'm a bit afraid that LLMs will make things harder for new frameworks and tools, since there are far fewer examples to learn from, and that this will entrench the established frameworks as the status quo.


I think with regards to changes within existing frameworks, that would actually be kind of nice.

I really wish framework developers would stick with an existing decent solution longer instead of trying to release new versions with breaking changes in search of some kind of ideal API.

I actually prefer the syntax of Svelte 5 with runes over the previous one, as it looks a little less magical to me, but I still wish they wouldn't release another major version and would instead just focus on making Svelte 4 really solid. I felt the same about the React move from class components to hooks. I know both examples come with backwards compatibility, but it would still be nice to have just one way to do something and make it really solid and polished.


Food for thought: let's say AI (some day) delivers a working application. Will it still matter which framework it is written in? AI writes it, we complain about a problem, AI fixes it, we programmers are out of jobs (at least web application programmers), and the users get updates and finally working applications. I know we are not there yet, but at that imaginary later point, I think frameworks will be a thing of the past.


We are there - right now - and it's about as predictably awful (or awfully predictable?) as the hype-men said: paraphrased, that everyone can now have their own Junior/Intern/Subcontractor to delegate "menial" programming work to.

Regrettably I don't have the link saved on the iPad I'm using right now, but there's a public GitHub repo where all commits are made by some LLM-based agent with zero human intervention - IIRC it's a React+NodeJS app (cliche as it is). All commits are made in response to Issues/Tasks filed by human users, but humans can't touch the code themselves. I couldn't tell if it was/is a genuine experiment or a glorified art project…

But if it is a demonstration of what the state of the art is, then from what I could tell it was a strange kind of managed chaos: from what I remember seeing, the codebase was a complete dog's dinner. LLMs are great at dropping dozens of lines of code into a new or existing method/class/function, but utterly hopeless at keeping the codebase coherent - and LLMs (just like so many subcontractors I've dealt with myself) never push back against bad ideas. Even if an LLM/agent did decide to do some kind of code cleanup, it's easy to see how a jumble of glorified Copilot addendums results in .js/.ts files far larger than their context window could take.

…but the miracle was that this repo had tests - and the tests all passed! (I think, perhaps, any test-failures triggered an automatic prompting of LLMs to fix the tests? So that’s to be expected).

Now assuming that repo was actually using “real AI” (as opposed to Amazon’s retail computer-vision AI: “Actually Indians”) I don’t know what technique they used to stop hallucinations of nonexistent APIs from breaking everything.

If anyone else knows that repo, I’m interested to hear your thoughts.


Who is instructing the AI to make an application, fix bugs and issue updates? Is the customer doing all this or the manager taking time out of their day? Are either of those really fit to figure out whether the app actually meets what the customer needs?

Seems like the software engineering role is still needed at the higher level, and that would still likely require some sort of framework to help make sense of what the AI is generating, so you can instruct it accordingly.


75% of the time PMs don't know or understand what the customer wants or needs either; at least an LLM roleplaying as a product-owner would have been trained on a corpus including research output on product development and usability.


The future of AI-assisted coding is probably agents that can automatically test the code candidates they generate.

That should help mitigate the problem. If it tries to use the old API, it just won't compile.
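A minimal sketch of such a loop, in Python. Everything here is hypothetical: `ask_model` is a stub standing in for a real LLM API call, and the compile check is just Python's built-in `compile()` used as a cheap validity gate.

```python
# Generate-check-retry loop: compile each candidate and, on failure,
# feed the error back to the model so it can correct itself.

def ask_model(prompt: str) -> str:
    # Stub for an LLM call. It returns broken code at first, and a
    # fixed version once it sees a compiler error in the prompt.
    if "SyntaxError" in prompt:
        return "def greet(name):\n    return f'hello {name}'"
    return "def greet(name)\n    return f'hello {name}'"  # missing colon

def generate_with_retry(task: str, max_attempts: int = 3) -> str:
    prompt = task
    for _ in range(max_attempts):
        code = ask_model(prompt)
        try:
            compile(code, "<candidate>", "exec")  # cheap validity check
            return code
        except SyntaxError as err:
            # Append the error so the next attempt can fix it.
            prompt = f"{task}\nPrevious attempt failed: SyntaxError: {err}"
    raise RuntimeError("no valid candidate produced")

code = generate_with_retry("Write a greet(name) function.")
```

In a real agent the check would be a full test suite rather than a syntax check, but the shape of the loop is the same.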


I work around that (with limited success) by prepending the latest docs (migration guides, how-tos) to the conversation. gpt-4o picks that up perfectly.
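The trick is just prompt assembly. A sketch, with a made-up guide snippet and function name; the message shape follows the OpenAI chat format:

```python
# Prepend current framework docs to the system message so the model
# sees today's APIs instead of whatever dominated its training data.

MIGRATION_GUIDE = """Svelte 5 notes:
- reactive state is declared with $state(), not plain `let`
- derived values use $derived() instead of `$:` labels
"""

def build_messages(user_question: str) -> list[dict]:
    return [
        {"role": "system",
         "content": "You write Svelte 5 (runes mode) code.\n\n" + MIGRATION_GUIDE},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("Convert this Svelte 4 store to runes.")
```

The messages list would then be passed to the chat completion endpoint; the docs ride along in every turn of the conversation.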


That's because they don't "understand" any of it. I think the interesting space here is in getting the machinery around the model to correct the output before returning it to users, which is the sort of space I'm assuming this app plays in.


As a non-LLM, I got so confused with Svelte 5's runes, that I just avoided it altogether by not touching it haha



