
> What am I missing here?

I’m guessing you haven’t actually been using it personally beyond some superficial examples.

Once you use it regularly to solve real-world technical problems, it's a pretty huge deal. The only people I've met so far who voice ideas similar to yours simply haven't used it beyond asking it questions it isn't designed for.




Anything beyond one-off asks is pretty hit or miss, at least for me, as to whether what ChatGPT is telling me is correct. Write me a complex SQL query that does this, write a Python script that will do that, show me the regex that will find these patterns in a string: all of those work really nicely and do save time.
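The regex case mentioned above can be as small as this; a minimal sketch where the pattern and sample string are my own illustration, not something from the thread:

```python
import re

# Illustrative one-off task: find Unix-style file paths embedded in a string.
# The character class is deliberately simple (word chars, dots, hyphens).
PATH_RE = re.compile(r"(?:/[\w.-]+)+")

text = "see /var/log/syslog and /home/user/notes.txt for details"
paths = PATH_RE.findall(text)
print(paths)  # ['/var/log/syslog', '/home/user/notes.txt']
```

This is exactly the size of task where asking ChatGPT beats re-deriving the pattern by hand, and where the answer is easy to verify before trusting it.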

When anything gets more complex than that, I feel like the main value it provides is to see what direction it was trying to approach the problem from, seeing if that makes sense to you, and then asking it more about why it decided to do something.

This is definitely useful, but only if you know enough to keep it in check while you work. Worse, if you think you know more than you actually do, you can tell ChatGPT it's wrong and it will happily agree with you, even when it was actually correct. I've tested both cases: correcting it when it was really wrong, and correcting it confidently when it was actually right. Both times it agreed that it was wrong and regenerated the answer it gave me.


> I've tested both cases: correcting it when it was really wrong, and correcting it confidently when it was actually right. Both times it agreed that it was wrong and regenerated the answer it gave me.

This is the peril of using what really is fundamentally an autocomplete engine, albeit an extremely powerful one, as a knowledge engine. In fact, RLHF favors this outcome strongly; if the human says "this is right", the human doing the rating is very unlikely to uprate responses where the neural net insists they're still wrong. The network weights are absolutely going to get pushed in the direction of responses that agree with the human.


The "just autocomplete" view is incorrect. I have actually had it push back on me when I incorrectly said that it was wrong.


I second this. It's been immensely useful to me, even with the occasional fabrications.


I wonder if anyone has a pointer or two to their favorite real-world examples.


> I am using Linux and trying to set up a systemd service that needs X to display a UI through VNC, how can I get the X authorization token in my systemd file

> I'm using Python and my string may contain file paths inside, for example: (...). For anything that looks like a file path inside the string we should replace its full path, for example (..)

> Can you write me a python script to kill processes in Windows that no longer belong to a process named "gitlab"

> I want to write a test for it that mocks both sentry_sdk and LoggingIntegration so that I can give my own mocks during test

> I want to create a Python script

It should be able to be called from the command line, like below (example)

Write me the script


All real examples from last week that took me about a minute to solve instead of googling, creating something from scratch, or thinking it through myself.
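The last prompt in that list, sketched as a minimal argparse skeleton. The argument names here are hypothetical, since the actual command-line example was elided in the comment:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Hypothetical interface; the comment's real example was elided.
    parser = argparse.ArgumentParser(description="Example command-line script")
    parser.add_argument("input", help="input file to process")
    parser.add_argument("--verbose", action="store_true",
                        help="print extra detail while running")
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(f"processing {args.input} (verbose={args.verbose})")
```

Boilerplate of this shape is what the tool produces reliably: the value is in not having to re-read the argparse docs for a one-off script.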


Asking questions that it's not designed for? Which would those be?


What is the meaning of life?

When will humans live in space?

Why am I depressed?

When will world war III happen?

Compute this math equation (function calling and compute engines will help with this)


If you are really claiming that these are the questions that people are asking when they say ChatGPT isn't useful, then...that is an unbelievably blatant straw man.


I’ve seen non-technical people ask the strangest questions, ones with no actual problem they are trying to solve or brainstorm. They think it’s just a game or a fun joke tool and want to try to get it to say something silly.

The technical people who reject it are quite curious psychologically; my personal suspicion is that they are threatened by it. They get hung up on small hallucinations and then almost get giddy when it produces something “wrong” in some way. I don’t understand why they fail to understand its crazy importance. I mean, it’s read everything, and without an agenda other than the material it was fed, no twisted incentives. The things I’ve had it do with me are mind-blowing, and my guess is that the people who understand it and how to leverage it will increase their own productivity so much that it will reshape economies and put many people out of work who don’t learn how to use it.

Definitely the most revolutionary development of my life, as I approach 50 and have been coding since I was 12, and, believe it or not, working professionally as a coder since I was 19. The internet and the iPhone have nothing on this development with LLMs.


> The things I’ve had it do with me are mind-blowing, and my guess is that the people who understand it and how to leverage it will increase their own productivity so much that it will reshape economies and put many people out of work who don’t learn how to use it.

This is the kind of hyper-sensationalism that I'm talking about. Do you really believe that, or is this you extrapolating to what could be possible in the future if the technology keeps improving? I feel like that is where a lot of the arguments ended up with crypto advocates as well: if you had doubts or questions about how big of a paradigm shift this was going to be for the world, you just didn't get it yet because you couldn't connect the dots this early on.

I'm not doubting that the tool is useful, or that ChatGPT is quite an accomplishment, but I just don't see it "reshaping economies" anytime soon.


> This is the kind of hyper-sensationalism that I'm talking about. Do you really believe that, or is this you extrapolating to what could be possible in the future if the technology keeps improving?

I don’t just believe it I already see it happening directly. I have made changes in hiring strategies and employment situations based on massive increases in productivity from using LLMs.

The difference in productivity of developers that embrace the new tools vs those who don’t is very obvious in my opinion. It’s probably the next 18-24 months when the impact becomes more obvious on larger scales.


Yep, totally agreed. I have very junior developers doing complex tasks with the aid of it, for example SQL queries, Elasticsearch, Ansible, and React, all without having touched them before.



