You're right, I have no coding skill. But testing out Lovable and bringing my idea to reality made me realize this is something I want to learn, so I've already begun taking a course on how to write software of my own.

People shouldn't be "scared" of these LLMs; they're just tools that open coding up to a wider audience.

That's a really positive outcome, one I am personally supportive of. Learning to code is a rewarding journey.

Now, while I am not scared of LLMs, I am scared for users who use them inappropriately.

I use LLMs extensively, and so I am intimately familiar with the dangers they pose to the uninitiated. I would HEAVILY caution against relying on LLMs until you can comfortably read and understand the code you're asking them to write.

Personally, I would recommend you first learn to code in a language of interest, then use LLMs to automate the stuff that has become second nature, the stuff you can pump out mindlessly. This takes the burden of monotonous tasks off your hands, and you have the expertise to check the LLM output for glaring issues. It's still not fully automated, but it's much faster: you write the complex, critical, or sensitive parts while the LLM churns out boilerplate and routine chunks, and then you come back later and proofread the LLM output.

Trusting AI code you yourself don't understand is a recipe for disaster. You claim your users' data will be private, but then you have to rely on AI jank to keep that data safe, if it is even safe. It might just throw everything into publicly accessible folders. What happens when you promise safety but don't actually provide any? What happens when a user's data is then stolen? Who does the court hold accountable? You? The LLM you blindly trusted?
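
To make the "publicly accessible folders" failure mode concrete, here's a toy sketch (the function name, paths, and storage layout are all made up for illustration). Generated code will happily write user records into a directory your web server serves as static files; the safer default is to keep them outside any web root with owner-only permissions:

    import json
    import os
    import stat

    def save_user_record(user_id: str, record: dict) -> None:
        # Risky pattern LLMs often produce: writing into a publicly
        # served directory, e.g. f"static/uploads/{user_id}.json"
        # Safer sketch: store outside the web root with tight perms.
        os.makedirs("private_data", mode=0o700, exist_ok=True)
        path = os.path.join("private_data", f"{user_id}.json")
        with open(path, "w") as f:
            json.dump(record, f)
        # 0o600: owner read/write only
        os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

The two versions look almost identical at a glance, which is exactly the problem: you only catch the difference if you can actually read the generated code.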
