Hacker News

Meanwhile, I've had the opposite experience.

I've used ChatGPT less and less, and aside from Copilot, which is a useful autocomplete, I just don't have much use for AI.

I know I'm not alone: even people I've seen get super excited, first about DALL-E and later about ChatGPT, now use both of them very rarely.






This is where I am with it now. I got bored with the image generators and tired of their plastic looking output.

I still use GPT or Claude occasionally, but I find that switching over to prompting breaks my mental flow. So it's only a net win for certain kinds of tasks, and even there it's not a huge step up from searching Stack Overflow.


For me it's been great for things like generating a LaTeX version of my CV, or CSS for a web app. It's worth an OpenAI subscription to me, but I do wonder whether it's worth all the energy and resources that have gone into building and powering those GPU clusters.

The huge up-front energy cost of GPU clusters, both manufacturing and operating them, is amortized over all subsequent uses of the models, i.e. inference. Inference itself is cheap per query. Your use cases aren't consuming that much energy; I'd guess the amount is in the same ballpark as what you'd use doing those things yourself.
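The amortization argument above can be sketched as a back-of-envelope calculation. Every number here is an illustrative assumption, not a measurement: a hypothetical 10 GWh training run, 10 billion lifetime queries, and 0.3 Wh of marginal inference energy per query.

```python
# Back-of-envelope amortized energy per query.
# ALL constants below are illustrative assumptions, not measured figures.
TRAINING_KWH = 1e7            # assumed one-time training cost: 10 GWh
LIFETIME_QUERIES = 1e10       # assumed total queries the model ever serves
INFERENCE_WH_PER_QUERY = 0.3  # assumed marginal inference energy per query

def amortized_wh_per_query(training_kwh: float,
                           queries: float,
                           inference_wh: float) -> float:
    """Spread the one-time training energy over all queries,
    then add the per-query inference energy (result in Wh)."""
    return (training_kwh * 1000.0) / queries + inference_wh

per_query = amortized_wh_per_query(TRAINING_KWH, LIFETIME_QUERIES,
                                   INFERENCE_WH_PER_QUERY)
print(f"{per_query:.2f} Wh per query")  # → 1.30 Wh per query
```

Under these assumed numbers, the amortized training share (1 Wh) and the inference cost (0.3 Wh) are of the same order, which is why per-query energy looks modest even when the up-front training cost is enormous.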

As for whether it's worth it, I'd argue this is the single most useful application of GPUs right now, both economically and in terms of non-monetary value delivered to users.

(And training them on almost all creative works available online is, IMO, by far the most valuable and interesting use of those works, but that's another discussion.)



