Hacker News
Show HN: AI Tool Is Now Supporting React, Angular, CSS, Svelte, Vue (webcrumbs.org)
97 points by m4rcxs 4 months ago | hide | past | favorite | 76 comments



Should we as developers put more effort into defending our craft? The movements in the artistic space around AI were widespread and vocal, but we developers seem not to care. I feel like developers in general are a bit quieter and more timid, and it leads to companies or entire industries taking advantage of us.

Am I the only one who feels like developers really need to be a bit more vocal in defending themselves, their craft, and even their sanity? Are we quiet because of the large salaries in the space?

I suppose the biggest question is how do you defend the craft but at the same time keep the advantage of automation and AI? (is it unions?)


Just leave it to run itself out.

Right now the sales pitch is "magical machine spits out any code you can imagine and it will all just work". This isn't the case now and I doubt it will be the case in the future.

It will get really good and make things faster and easier, but you will still need to know how to use it to be effective, and you will still need to understand the underlying tech to be the most effective.

It's like carpentry and power tools. Carpenters can slap together a whole house in a day, in large part thanks to power tools. There are still carpenters who are all about the craft and traditional ways, and they make amazing pieces of furniture, and there are those who need to build houses, fast. Finally there are those who dip into carpentry and realise that it involves a whole lot of new knowledge and a degree of understanding that was invisible to them before trying.

If you don't know what you're doing a power tool won't make you a great carpenter.


Depends on your motive. I enjoy the "art" of programming, but I enjoy even more using it to build cool stuff, make companies happy, start side projects or even businesses.

I've not seen any indication of AI legitimately replacing software engineering; rather, it enhances it, perhaps resulting in the need for fewer engineers for the same work. Even as an engineer that's a huge win.

The role is less coding, more engineering. I suppose if the "craft" is the coding part, you're right that it may need defending, but to me that's not the craft.


Totally in the same camp. The way I see AI improvement: where you once needed a couple of developers to launch something, now you can do it yourself. The cost of launching is lower, and I think that should be appreciated, not fought against.

If a developer is actually solving problems, it's like being handed a tractor to dig a hole in addition to a shovel.


> Should we as developers put more effort into defending our craft?

Fight against AI tools? You are not likely to win that battle.

Horse carriage manufacturers would not have stopped car makers, regardless of how hard they tried or unionized. You have to adapt.


Code doesn't really have the same 'human touch' the way art does. We've spent the past 20 years making libraries, higher level languages, UI frameworks, etc to make development faster. If there's any 'human touch' it's a lot higher level than just the lines of code. Art has mostly been the same for a very long time. Especially painting/drawing. For most people it's a passion and creative outlet. From what I've seen (some) AI investors say so far, they want to replace that creativity.


Art and technology used to be... the same. Almost nobody really wants the artistry that comes with well crafted code these days though. It's mostly just about being done.


And that applies to more than just code. Buildings, tools, cutlery, furniture, whatever -- everything used to have way more craftsmanship.


Are you talking about the end products or about the process of creating them? The GP talks about the craft of the woodworker, not about the furniture. As a customer, I don't really care how the knife was made, as long as it's a good-looking, sharp, and sturdy knife. In the same way, the customer of AI-produced software will only care whether the software addresses their needs and has good UX. And I fully believe AI tools will deliver that, rather sooner than later, even if with human help.


I don’t feel threatened by it, and I haven’t seen anything that warrants this reaction. Putting code into files is trivial entry-level stuff. Let me know when an automation flawlessly (as in 100% accuracy), consistently (not just once), and with truly arbitrary requests (not cherry-picked) pulls off systemic refactors and feature additions to one or more massive codebases full of poorly written, non-standard garbage. For the record, I find today’s AI tools useful, but they do not at all threaten my livelihood.


>Should we as developers put more effort into defending our craft?

No.

Nobody worried about optimizing compilers taking the jobs of assembly programmers.

Tools make us able to do _more_ not less. This is just one more. And it happens to be pretty good at doing obvious boring things that have been done a million times which most of us don't want to do anyway.

Like recently I needed to design a nice html error page, I don't have a designer to go to, and it wasn't really a big enough deal for that kind of thing anyway. Instead of having to dust off a bunch of web design skills and spend a long time figuring out fiddly little style sheet things... or just doing a terrible job... I asked AI to do it. 90% of the work was done in 10 seconds, then I spent 5 minutes polishing it. If I did it myself it would either have been a couple of hours to get something as nice or for the same 5 minutes I could have done a shit job. Nobody lost work because of AI that day, I was just able to do more important things than make an internal developer-facing error page look nice.


No, because

1. It's a fight you can't win

2. It might allow me to finally build the ideas I have. I have better things to do than striking a keyboard all day.

I can't wait for AI to replace me.


No. Except for an elite artistic few, the defenders of unproductive ways, and even those who refuse to aggressively upskill, end up poor with few prospects.

This is especially true in an industry where anyone can jump in (unless we want to lock computers behind licences).

I want the high salary to continue, so I will move where the tools take me. AI lets me generate a ton more features in the same amount of time.


Quite a few of the responses to your questions are based on the current state of AI.

Firstly, this is never going backwards. It will only get more capable. There will likely be algo changes that unlock new capabilities over time. There are definitely areas that humans have the advantage but this is similar to the “god of the gaps” concept in that the area where people have an advantage will reduce over time.

There’s currently no real understanding in the model and it’s really amazing what we can do with hyper-autocomplete. Humans made that happen. We’re the ones doing the innovation.

We’ve long been in the business of automating jobs away. This time it’s our own.

For the foreseeable future, get good at leveraging it and stay current.

(Intuition: AI is also good at the business layers. It can probably produce better specs than many (not all) people paid to do it. It can generate ideas and communicate them in many formats. It’s super confident, so could easily be a consultant. I don’t think the business analysts and strategy people should be too confident.)


I am not a developer, but AI helps me translate my thoughts more fluidly into real-world things. I am learning a lot about development, but it allows me to think openly and vaguely into the GPT and iterate toward my intent and goal. I love it. Embrace it and find out how it can make a clone of you, so you can have a harem of AI alts that @rcconf(N) can throw at a particular problem of your choosing.

Imagine having a bunch of them as a bot swarm of AIs that you can have an orchestration layer upon which itself is one of them tuned to manage them.

"In Lak'ech" Mayan: I am another yourself.

That will be great.

Also when you can adopt expert personas from others that are personal AGI lego.


I think a crucial difference here is that images are (slightly) more defensible.

There are still some tells that indicate an image was AI created. It's enough to mark them as AI and discourage brands from using them, after pushback. At this point, the human difference is noticeable.

This isn't the case for code. No one can tell the difference between AI- and human-generated code, not at the point at which users access it. And since no one can tell, there really isn't much to 'fight.'


I suspect that developers and luddites are almost entirely separate groups of people.


The best defense is to keep improving as programmers. By constantly refining our skills and learning new technologies we maintain our relevance and value. AI and automation are tools that can enhance our work, not replace it.


Pretty sure every single profession affected by industrialisation said the same...

This is just machines for the knowledge economy.


The difference between AI art and AI code is that the latter can never be used for anything more than prototypes for someone who doesn't know how to code. For people who know how to code it becomes a super power.


I think developers just don't feel threatened by it, not yet at least. For software it's more of a "rising tide that lifts all boats".

I've yet to personally benefit from AI in any of my workflows, in any meaningful capacity, but I wouldn't complain if that changed.


If AI is building better software, why stop it? If it's not, what's the threat? What's to defend? Our right to build worse software? To prevent people from using tools as they see fit? What course of action would you take?


Losing my sweet, sweet TC is the threat.


That's not how it works under capitalism. You don't tell people not to do things, or that they are doing things wrong and only you know better. People are going to employ AI, and it will be proven more useful than human devs, or not. The best defense is showing your value to customers.

Personally, I don't find "craft"/"art form" to be the best way to think about technology either. Scientists don't think what they are doing is art (I think), so why should engineers think like that?


How about Svelte 5 with runes mode?

One of the biggest challenges I've encountered so far while working on my SvelteKit with Svelte 5 codebase is that all frontier models struggle to understand differences between major versions of languages and frameworks, leading to a lot of annoying hallucinations. They're fantastic at writing code, but the last mile of figuring out how to fix issues for incorrect APIs or syntax becomes very tedious.


This is an enormous footgun that makes AI absolutely impossible to use in Godot. You get a ton of Godot 3 code output which really doesn’t resemble Godot 4 code at all, despite having the same name.


I'm a bit afraid that LLMs will make it harder for new frameworks and tools to gain traction, as there are far fewer examples to learn from, and that this will entrench the established frameworks as the status quo.


I think with regards to changes within existing frameworks, that would actually be kind of nice.

I really wish framework developers would stick with an existing decent solution longer instead of trying to release new versions with breaking changes in search of some kind of ideal API.

I actually prefer the syntax of Svelte 5 with runes over the previous one as it looks a little less magical to me, but I still wish they wouldn't release another major version and instead just focus on making Svelte 4 really solid. I felt the same about the React move from class components to hooks. I know these two examples come with backwards compatibility, but still would be nice to have just one way to do something and make it really solid and polished.


Food for thought: let's say AI (some day) delivers a working application. Will it still matter which framework it is written in? AI writes it, we complain about a problem, AI fixes it, we programmers are out of jobs (at least web application programmers), and the users get updates and finally working applications. I know we are not there yet, but at that imaginary later point, I think frameworks will be a thing of the past.


We are there - right now - and it’s about as predictably awful (or awfully predictable?) as the hype-men said: paraphrased, it means everyone can now have their own Junior/Intern/Subcontractors to delegate “menial” programming work to.

Regrettably I don’t have the link saved on the iPad I’m using right now, but there’s a public GitHub repo where all commits are made by some LLM-based agent with zero human intervention - IIRC it’s a React+NodeJS app (cliche as it is). All commits are made in response to Issues/Tasks filed by human users, but humans can’t touch the code themselves. I couldn’t tell if it was/is a genuine experiment - or a glorified arts project…

But if it is a demonstration of what the state-of-the-art is, then from what I could tell it was a strange kind of managed chaos. From what I remember seeing, the codebase was a complete dog’s dinner: LLMs are great at dropping dozens of lines of code into a new - or existing - method/class/function, but utterly hopeless at keeping the codebase coherent, and LLMs (just like so many subcontractors I’ve dealt with myself) never push back against bad ideas. Even if an LLM/agent did decide to do some kind of code cleanup, it’s easy to see how a jumble of glorified Copilot addendums results in .js/.ts files far larger than their context window could take.

…but the miracle was that this repo had tests - and the tests all passed! (I think, perhaps, any test-failures triggered an automatic prompting of LLMs to fix the tests? So that’s to be expected).

Now assuming that repo was actually using “real AI” (as opposed to Amazon’s retail computer-vision AI: “Actually Indians”) I don’t know what technique they used to stop hallucinations of nonexistent APIs from breaking everything.

If anyone else knows that repo, I’m interested to hear your thoughts.


Who is instructing the AI to make an application, fix bugs and issue updates? Is the customer doing all this or the manager taking time out of their day? Are either of those really fit to figure out whether the app actually meets what the customer needs?

Seems like the software engineering role is still needed at the higher level, and that would still likely require some sort of framework to help make sense of what the AI is generating, so you can instruct it accordingly.


75% of the time PMs don’t know or understand what the customer wants or needs either; at least an LLM roleplaying as a product owner would have been trained on a corpus including research output on product development and usability.


The future of AI-assisted coding is probably agents that can automatically test the code candidates they generate.

That should help mitigate the problem. If it tries to use the old API it just won't compile.
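To make the idea concrete, here is a minimal sketch of such a generate-check-retry loop. All the names are hypothetical, and the model call is stubbed with a fixed list of candidates so the control flow can be shown end to end; a real agent would call an LLM and run an actual test suite in the check step.

```javascript
// Stand-in for a real LLM call: returns candidates from a fixed list.
function makeStubModel(candidates) {
  let i = 0;
  return (prompt) => candidates[Math.min(i++, candidates.length - 1)];
}

// "Compile" step: reject candidates that aren't even valid JavaScript.
// A real agent would also execute tests here, not just syntax-check.
function check(code) {
  try {
    new Function(code);
    return { ok: true };
  } catch (e) {
    return { ok: false, error: e.message };
  }
}

function generateWithRetries(callModel, task, maxAttempts = 3) {
  let prompt = task;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const code = callModel(prompt);
    const result = check(code);
    if (result.ok) return code;
    // Feed the failure back so the next attempt can correct it.
    prompt = `${task}\nPrevious attempt failed with: ${result.error}`;
  }
  return null;
}

const model = makeStubModel([
  "return oldApi.render(;", // syntactically broken: rejected
  "return 1 + 1;",          // valid: accepted on the second attempt
]);
const code = generateWithRetries(model, "write a snippet");
console.log(code); // "return 1 + 1;"
```

The interesting property is that stale-API hallucinations become recoverable errors instead of shipped bugs, at the cost of extra model calls.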


I work around that (with limited success) by prepending the latest docs (migration guides, how-tos) into the conversation. gpt-4o picks that up perfectly
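That workaround is essentially a prompt-construction step. A sketch of what it might look like, assuming a chat-completions-style message format; the docs excerpt and system-message wording here are placeholder examples, not anyone's actual setup:

```javascript
// Placeholder docs excerpt: in practice you'd paste the relevant
// migration guide or how-to for the framework version you're on.
const svelte5Docs = `
Svelte 5 reactivity uses runes:
  let count = $state(0);
  let doubled = $derived(count * 2);
`;

// Prepend the current docs so the model stops reaching for
// last-major-version syntax it saw more of during training.
function buildMessages(docs, userRequest) {
  return [
    {
      role: "system",
      content:
        "You are a Svelte 5 assistant. Use ONLY the APIs shown in the " +
        "reference below; do not fall back to Svelte 4 syntax.\n\n" +
        "Reference:\n" + docs,
    },
    { role: "user", content: userRequest },
  ];
}

const messages = buildMessages(svelte5Docs, "Write a counter component.");
console.log(messages[0].content.includes("$state")); // true
```

The resulting array could then be sent to any chat-style endpoint; the point is only that version-correct reference material sits ahead of the request in context.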


That's because they don't "understand" any of it. I think the interesting space here is in getting the machinery around the model to correct the output before returning it to users, which is the sort of space I'm assuming this app plays in.


As a non-LLM, I got so confused with Svelte 5's runes, that I just avoided it altogether by not touching it haha


I asked for an 'interactive particle simulation' and got a big gray rectangle. Then I clicked regenerate and the page crashed.

To me it's not clear how I would debug the component. I can ask for modifications, but I can't, say, add log statements, inspect variables, or give feedback on errors.


Welcome to the future!


Code LLM tools are more defensible when they work only by training on documentation and public domain code of the languages and frameworks.

But they tend to have also been trained on code under copyright, without license to do so.

Some code LLM tools vendors have had to put in 'safeguards', to try to avoid further embarrassing evidence of outright copying. That doesn't mean that they're not still often passing through obfuscated copying.

In real life, when someone is caught plagiarizing, such as copying a paragraph from a published work, and changing some words to fit, it's a career-ending scandal.

Other times it's a grayer area: the mechanical mashing up of multiple works, with the intent of "take these multiple copyrighted works, and mechanically combine them to my needs, in a way that's hard to explain to a judge, and forget about all the copyrights, so I can claim copyright". (Hey, if merely saying "it's an app" can smokescreen an illegal taxi service, hotel service, or rental price-fixing, just think how effective a shield even matrix multiplication is.)

In software, copying code without license, or being tainted by exposure to code when you're supposed to "cleanroom" it, are both already considered illegal or shady.

A lot of the current enthusiasm around generative AI feels a bit like the popularity of media piracy -- many people know, or have a nagging suspicion, that it's against norms or laws, but it's just so appealing, and everyone around you is doing it.

It also has the dynamic of the many exploiting the relative few creators, in a way that wasn't part of the social contract or laws, and which the creator doesn't want, but the many can simply take.

Especially when the many are armed and cheered on, by tool vendors, many of whom should know exactly what they're doing, but who want to win big from this nonconsensual exploitation of the works of others.

And, as the former head of Google recently advised at Stanford, just take it, get big money, and pay lawyers later.


I'm not having much success with this beyond very basic components. I think I'd have more success iteratively working with ChatGPT or Claude directly, and then pasting the code into jsfiddle, pastebin, or replit to see the results and iterate.


I understand your point, iterating with ChatGPT or Claude can be a great approach for many cases. Our focus with Webcrumbs is to speed up front-end component creation, especially for those looking for a quicker, more direct solution. But of course, there's always room for improvement! If you have specific suggestions, I'd be happy to hear them.


When exporting the code, all the field values are hard-coded in the React/HTML, and there is no easy way to get data from a REST API.

Maybe I am missing something. I tried the trading template.

Also: BUG report.

I asked for a complex field change. This gave me an orange error message "failed try again". But now there is no abort/cancel button. Only a retry button.

Good luck


I’m begging you to try out cursor and sonnet as the model.

It is incredible, and it genuinely feels like magic when it comes to codebase context.


Thank you, great idea! Have you joined the Discord channel?


In this case is it not faster to just code the component yourself?


Interesting. I like that better developer tooling is coming out.

That said, this particular tool seems like it's trying to be very hand-holdy, maybe not targeting developers?

For developers Claude Dev seems like a more functional equivalent: https://github.com/saoudrizwan/claude-dev


Definitely look into cursor if you like Claude dev. It isn't apples to apples, but their chat mode is very similar.


This one? https://www.cursor.com/

Looks like it's not open source


Yeah, not open source.


Can you tell us what is different from Claude Sonnet besides some frontend help? Can you tell us a bit more about the stack/tech, if it's more than just getting some LLM (GPT/Claude) to come up with pages and rendering them?


Frontend-AI goes beyond just utilizing a standard LLM like GPT or Claude. While these models are powerful for generating content, our tool is specifically designed to streamline front-end development by providing specialized support for frameworks like React, Angular, Svelte, Vue, and CSS.


This works better than I expected. Orient your front page to give a less vague and more detailed example, because specific results are excellent. I am discouraged by the overuse of rounded corners, but encouraged by the correctly equalized white space and “in sample” choice of font sizes, paddings, etc. It does things well that programmers do poorly.

IMO the document layout folks and people building on top of that like at least 1 YC startup have a more promising path forward for whole-cloth complex component representations versus LLMs with robust prompts but no fine tuning.


I've been using AI models to do frontend design and candidly this doesn't hold a candle to just working directly with the base models.

With the base models you can ask: Hey, make me 5 designs that fit xyz requirements and it will go to work. If you already know front end design, you can take that up a level and pass it an emmet abbreviation of the rough code you want and say only use tailwind colors of sky, zinc, and blue.

I am decidedly not your audience, but if anyone on HN is thinking about how to speed up frontend dev and they are a frontend dev, I'd suggest just messing with the base models more.

Once you get the hang of it, it can be like working with a designer whose feelings don't get hurt... which I love.

"Meh, I don't like those designs, generate me 5 more and feel free to increase the model's temperature up quite a bit to see if you can come up with something cool."


I'm not really sure what you mean by "base models?"


Claude, GPT, Llama3.1 directly. The models are really capable if you prompt correctly.


They're likely referring to using the large language models (GPT, Claude, etc.) through their primary interfaces, because you get more control by interacting with the LLM directly.


how hard would it be to have a Phoenix .html.heex mode instead of your typical react/angular? .html.heex is really just plain old HTML with tailwind css classes. I could easily copy and paste your generated UIs into my views and play around there! thanks!


You can export and even download a functioning .html! Does that help?


for sure, I guess I didn't see that UI button to export as html


Pricing model?


Free so far


Well, I guess the demo doesn’t work for React components even when prompted for them? Overall the output seemed best for static demo showcasing; it was all HTML.


Try clicking "Choose Framework" or "Export Code". There you can find all the available formats to export =)


Gotcha, that wasn't obvious on mobile


The reason is that the mobile version doesn't include all the tools available in the desktop version. The mobile version is more of a 'light' or 'shortened' version, offering fewer features for a more streamlined experience.


All of the properties controls are bound to a network request. This is a big bummer. I don't like having to wait 400ms every time I change a color.


Approaching the singularity...since no human understands all those things.


is there a web framework that managed to significantly iterate on knockout or is it all needless fluff to spout more buzzwords every week?


Wow, look at that.


I can’t stand these “tools”. And I find it morally wrong trying to automate every little thing a developer does.


The goal of Frontend-AI isn't to fully automate the process, but rather to save you time by generating a strong starting point. As the code becomes more complex, it might require some refactoring, so developers still need to put in effort to refine and enhance it.


Consider it an improved version of auto complete in your IDE. It helps type pieces of code faster, it doesn't do any of the thinking involved in building a software product.


It isn’t an improved version of autocomplete though, it allows you to submit a design / image and output code. Narrowing the gap between designer and developer.


What is your argument exactly? What's morally wrong with automation?

Don't get me wrong. There are many good arguments against AI but claiming we don't need more automation is one I don't understand, so please enlighten me.


For me it boils down to the reduction of jobs and pay and the displacement of workers. The industry is already oversaturated, and the evolution of automation has sped up greatly. This type of automation goes hand in hand with displacement. I don’t foresee some great enlightenment at the end of the tunnel here; I see the end of development as it looks today. And I don’t think it will bear any resemblance to the past.


Everything will be worse, slower, more poorly designed and even well less understood?



