Have you seen this video? https://www.youtube.com/watch?v=9QZlQMpNk-M

I think the author is onto something – while AI might not be able to program per se, it can certainly be handed a code snippet and then use its huge corpus of Internet Learning™ to tell you things about it, code that looks like it, and ways (people on the Internet think) it might be solved better.

In that sense, it isn't replacing the programmer; it's replacing IDE autocomplete.




I really enjoyed the video, thanks for sharing.

I think the author is operating in what I consider to be the sweet spot of current LLMs - where I can ask a question I don't know the answer to but can reliably spot bullshit (either because I know enough or through other means). I think there's a lot of value to be had when those conditions are met, and not just for coding.


That is a good take on it. I have been saying this thing is really good at summaries and terrible on details. So watch what it spits out.

Last night I sat down and tried using it to write an 8086 emulator. It got a simple emulation outline fairly quickly, but when it came to getting each of the instructions and interrupts correct, it fell flat very fast. What was interesting is that it made the exact same mistakes many early emulator writers make. You could then correct it and it would give it a shot at 'doing better'. At one point I got a bit bored with it and kept feeding it 'can you make that more compact/better'. It did an adequate job at that, eventually using templates and jump lists. It did not get very far with Duff's device or dynarec, but I am sure I could have guided it into doing that.
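
For the curious, the 'jump list' version it converged on had roughly this shape — this is a from-memory sketch in Python, not its actual output; the two opcodes are real 8086 (NOP and INC AX) but the handlers are toy ones:

    # Minimal sketch of table-driven opcode dispatch (the "jump list"
    # approach): an opcode -> handler table replaces a long if/elif chain.
    class CPU:
        def __init__(self):
            self.regs = {"ax": 0, "bx": 0}
            self.ip = 0
            self.dispatch = {
                0x90: self.op_nop,     # NOP
                0x40: self.op_inc_ax,  # INC AX
            }

        def op_nop(self):
            pass

        def op_inc_ax(self):
            # 16-bit register, so wrap at 0xFFFF
            self.regs["ax"] = (self.regs["ax"] + 1) & 0xFFFF

        def step(self, memory):
            opcode = memory[self.ip]
            self.ip += 1
            handler = self.dispatch.get(opcode)
            if handler is None:
                raise NotImplementedError(f"opcode {opcode:#x}")
            handler()

    cpu = CPU()
    cpu.step([0x40])       # execute INC AX
    print(cpu.regs["ax"])  # 1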

But for a majority of things in emulation, 'close enough is good enough'. That is an interesting finding about coding things up, I think. This thing is also going to produce a seriously crazy number of bugs we will be chasing for decades.


> replacing IDE autocomplete

Copilot has been very useful the times I've used it. It's not perfect, but it does cover a lot of boilerplate. It also makes it much easier to jump between languages.


I’ve been working with Copilot for a few months, and the biggest surprise is how it has led to much better-commented code. I used to comment tricky code to explain it. Now I comment trivial code instead of writing it. Often two lines of comment will get me 10 lines of copiloted code, faster than I could have typed it, and with good comments to boot.
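
A made-up illustration of that ratio (hypothetical, not an actual Copilot transcript) — the two comment lines on top are what I'd type, the rest is the kind of completion it tends to fill in:

    # Read a CSV of (name, score) rows, drop rows with a non-numeric score,
    # and return the top N names sorted by score descending.
    import csv

    def top_scorers(path, n):
        rows = []
        with open(path, newline="") as f:
            for row in csv.reader(f):
                if len(row) != 2:
                    continue  # skip malformed rows
                name, score = row
                try:
                    rows.append((name, float(score)))
                except ValueError:
                    continue  # skip non-numeric scores
        rows.sort(key=lambda r: r[1], reverse=True)
        return [name for name, _ in rows[:n]]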


> ways (people on the Internet think)

It reminds me of arguments that it's not the computer that plays chess, but its programmers.

You can describe a GPT's response as a statistical average of responses on the internet (for quite a contrived definition of average), but at some point it will be easier to describe it as analyzing a snippet and forming an opinion (based on what people on the Internet think). Are we past that point? I'm not sure yet, but we are close.


> quite a contrived definition of average

Sure, maybe an argmax mixed with an RNG isn't really an average, but some would say it's quite a contrived definition of forming an opinion.


argmax is doing all the heavy lifting there. Consider argmax is_proof_of_P_eq_NP(string)

You can describe the solution of any well-defined problem with argmax; that doesn't make it trivial.
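
Spelled out as code (is_proof_of_P_eq_NP being the hypothetical verifier from above — assume it returns 1 for a valid machine-checkable proof and 0 otherwise):

    # Any well-defined search problem can be written as an argmax,
    # but that says nothing about how hard it is to compute.
    from itertools import product

    def argmax(candidates, score):
        best, best_score = None, float("-inf")
        for c in candidates:
            s = score(c)
            if s > best_score:
                best, best_score = c, s
        return best

    def all_strings_up_to(max_len, alphabet="01"):
        for length in range(max_len + 1):
            for chars in product(alphabet, repeat=length):
                yield "".join(chars)

    # proof = argmax(all_strings_up_to(10**6), is_proof_of_P_eq_NP)
    # Perfectly well defined, and utterly intractable: the candidate
    # space is exponential in the string length.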


I spent this afternoon asking ChatGPT (3.5, not 4) to help me query AWS resources into a CSV. It gave me a 90% correct answer but made up a native CSV output option. When I told it that option didn't exist, it got almost sassy, insisting it was correct. Eventually it gave me a closer answer using JSON and jq after I prodded it.
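
For reference, you can skip the CLI + jq route entirely with boto3 — a minimal sketch, assuming EC2 instances were the resource in question (the original task may have been something else) and that credentials/region are configured the usual way:

    # Dump EC2 instance IDs, types, and states to CSV with boto3
    # instead of the CLI + jq pipeline.
    import csv
    import boto3

    ec2 = boto3.client("ec2")
    with open("instances.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["instance_id", "type", "state"])
        # describe_instances is paginated; the paginator walks all pages
        for page in ec2.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                for inst in reservation["Instances"]:
                    writer.writerow([inst["InstanceId"],
                                     inst["InstanceType"],
                                     inst["State"]["Name"]])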

I had a similar experience asking it to write an API client. It wrote something very plausible but just concocted an endpoint that looked real but didn't exist.


This is how I use it. It is the best rubber duck money can buy.



