Hacker News

Am I the only one who's been disappointed with GPT for coding? It's good at using common libraries and frameworks, but for anything involving more niche libraries it fails miserably almost every time, to the extent that by the time I've debugged its output and written prompts trying to get it to do things properly, I've spent more time than just writing it myself would have taken.

Still good for handling a lot of grunt work, though, and really useful for doing the stuff where I'm weaker as a "full stack" developer.




I've so far found it almost entirely useless for code (mostly Scala, no big surprise there), but for suggesting tools / alternatives, big picture stuff it's come up with some interesting ideas.

If I was hacking Javascript or Python, especially gluing together common components, I'm sure I'd have a different experience.


I've started to use ChatGPT for semi-high-level questions only this week, and I'm with you. It has hallucinated quite a few nonexistent functions, and it's generally unhelpful slightly more often than it is helpful. Just now I asked it why an `<input />` and a `<select />` with the same CSS had different heights (I've managed to avoid CSS for a while, shoot me XD), and gave it the CSS that was applied. It suggested the default browser styling was the culprit and that I should set the same height on both elements. I replied that they already had the same height set. Then it suggested setting the same padding - they already had the same padding. Then it put `box-sizing` in an example, and I promptly realized that `<input />` and `<select />` must have different default values for `box-sizing`. I asked if that was correct, and it said yup!
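For anyone hitting the same thing, the gist of it can be sketched like this (a sketch, not the exact styles from the comment above; the concrete height/padding values are made up for illustration, and which element defaults to which model varies by browser):

```css
/* Some form controls default to box-sizing: border-box while others
   historically defaulted to content-box, so identical height, padding,
   and border can still render at different total heights.
   Forcing one model on both evens them out: */
input,
select {
  box-sizing: border-box; /* height now includes padding and border for both */
  height: 2.5rem;         /* illustrative values, not from the original comment */
  padding: 0.5rem;
  border: 1px solid #999;
}
```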

Based on what I've seen elsewhere, I really feel like it should've been able to answer this question directly. Overall this matches my experience so far this week. I'm not saying it's never useful, just that I regularly expected it to be... better. I haven't had access to GPT-4 yet, though, so I can't speak to whether it's better.


I remember asking it how to remove hidden files with `rm`, and it hallucinated an `-a` option. Sometimes the hallucination makes more sense than reality.
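For the record, `rm` really has no `-a` flag; hidden files are just names that start with a dot, so you match them with a glob. A quick sketch (the `demo` directory and file names are made up for illustration):

```shell
# Set up a throwaway directory with one hidden and one visible file.
mkdir -p demo && touch demo/.hidden demo/visible

# There is no `rm -a`; dotfiles have to be matched explicitly.
# `.[!.]*` matches names beginning with a dot, but not `.` or `..`.
rm demo/.[!.]*

ls -A demo   # only `visible` should remain
```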


Googling "why do an <input /> and a <select /> with the same CSS have different heights" and pressing "I'm Feeling Lucky" would give you the correct answer in seconds.

The AI is just a great waste of time in almost all cases I've tried so far. It's not even good at copy-pasting code…


I think of ChatGPT as being a “close example” of the final code that I want to produce. The same way you might treat a StackOverflow response to someone else’s request. Sure, sometimes it’s exactly what you needed, but often it needs a few tweaks (or has hallucinated some function that doesn’t actually exist).


I think it's pretty terrible for coding, but very good for higher-level designs. Even getting something wrong from ChatGPT is valuable: I can read it and understand why it's wrong, and that understanding of what the solution missed is valuable in itself, because I'll make sure my own solution accounts for it.


> useful for doing the stuff where I'm weaker as a "full stack" developer

I'm really excited about this part; I've been using it to help with DevOps stuff and it's been giving me so much more confidence in what I'm doing as well as helping me complete the task much quicker.


Have you double-checked everything it told you, as you should with any AI-generated output, since it's not reliable in any way?


Same. I was worried about my job at first - a chatbot being even occasionally factually correct was shocking - but the limitations became much clearer when its proffered solutions relied entirely on hallucinations.

Still better than an average Google search, though, given that that mostly returns listicles.


I'm in the same boat. Every time I ask it for C# code it gives me back crap. Nine times out of ten it's using non-existent libraries. The same thing happened the other day when I asked it for an SQL query. I'm still on ChatGPT 3.5, though; perhaps 4 is way better.


Started with ChatGPT, migrated to Bing Chat, now onto Bard... I added Copilot at the same time I started using ChatGPT. I've settled on Bard (which has gotten increasingly better towards the end of the week) and Copilot.

I'm using Copilot as a better IntelliSense - but I don't use it for big templates - and Bard to find/summarize stuff I figure is probably on Stack Overflow somewhere.

For boilerplate I think I've seen a 35% increase in speed. For individual edge cases (like porting Azure AD v1/Identity 2.0 to Azure AD v2/Identity 2.0 authentication), maybe a 10-15% improvement. My day-to-day is C#.


I've found it to be really helpful for refactoring code, but not for solving an entire problem.



