
Nope. Learn how to use it in almost everything you do. It’s a game changer.

LLMs aren’t AGI. They’re far from it. But they have massive uses for reasoning over available context.

I’ll give you an example. I’m trying to set up bulk API monitoring across 200k JVMs. The API documents are horribly out of date, but I can get the raw URIs from the monitoring tools.

I can just take these URIs, send them into ChatGPT, and ask for a Swagger spec, along with a regular expression to match each URI to the Swagger API. It figures out the path and query params from the absolute paths.

Sure, I could try to figure out how to do this programmatically using some graph- or tree-based algorithm. But ChatGPT basically made it possible with a dumb Python script.
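To make the idea concrete, here is a minimal sketch of the kind of dumb script this describes. It is not the commenter's actual code: the heuristic (treating purely numeric or UUID-shaped segments as path parameters) and the function name `to_template` are assumptions for illustration.

```python
import re

def to_template(uri: str):
    """Guess a Swagger-style path template and a matching regex
    from one concrete URI.

    Heuristic (an assumption, not the actual script): a segment
    that is all digits or UUID-shaped is a path parameter.
    """
    path, _, _query = uri.partition("?")  # query params ignored here
    template_parts, pattern_parts = [], []
    for i, seg in enumerate(s for s in path.split("/") if s):
        if re.fullmatch(r"\d+|[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}", seg):
            template_parts.append("{id%d}" % i)   # parameter placeholder
            pattern_parts.append(r"[^/]+")        # match any one segment
        else:
            template_parts.append(seg)
            pattern_parts.append(re.escape(seg))
    template = "/" + "/".join(template_parts)
    regex = "^/" + "/".join(pattern_parts) + "$"
    return template, regex

tmpl, rx = to_template("/orders/12345/items?limit=10")
# tmpl -> "/orders/{id1}/items"
# rx   -> "^/orders/[^/]+/items$"
```

The regex it returns is what lets you map future raw URIs from the monitoring tools back onto the inferred spec path.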

Of course I may still need a person to fill in the details. But just getting a Swagger spec done for thousands of services in an afternoon was awesome.




> Learn how to use it in almost everything you do. It’s a game changer.

This type of rhetoric is part of the reason so many compare the current crop of AI to cryptocurrency hype: proponents constantly telling others to shove the technical solution into everything, even where it’s not necessary or worse than the alternative.


I'll put it a little differently. It is of immense help if you really know what you're doing but want to do it faster.

I know where you're going. I've had folks say to me: "I really like co-pilot because it enables a beginner like me to write code". This sentiment often comes from folks having non-technical roles who want to create their own software solutions and not have to deal with engineers. I roll my eyes at that one.

You need to be able to spot specific areas of acceleration, not just treat it as a hammer for every problem.


I’m surprised you are able to get a whole script working as expected out of it. I’ve tried using ChatGPT just for individual lines of code, and it always comes up with a solution that’s, I guess you could say, far too “creative” to be useful, and it often doesn’t end up doing what I expect when I go to test what it’s given me.


I had a very specific ask. I gave it the URIs as comma-separated values and asked for the Swagger spec. There wasn’t scope for creativity.

I could also split the URIs by service name, which helped parallelize my questions. It wasn’t just dumping the data in; there was some cleanup behind the scenes that I had to do.
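The splitting step above can be sketched in a few lines. This is a hypothetical illustration, assuming the service name is the first path segment of each URI (e.g. /billing/v1/invoices); the real grouping rule and any cleanup would depend on the actual URI scheme.

```python
from collections import defaultdict

def split_by_service(uris):
    """Group raw URIs into per-service batches, assuming
    (hypothetically) the service name is the first path segment."""
    groups = defaultdict(list)
    for uri in uris:
        segments = [s for s in uri.split("?")[0].split("/") if s]
        service = segments[0] if segments else "unknown"
        groups[service].append(uri)
    return dict(groups)

batches = split_by_service([
    "/billing/v1/invoices",
    "/auth/login",
    "/billing/v2/refunds?status=open",
])
# batches -> {"billing": [...2 URIs...], "auth": ["/auth/login"]}
```

Each batch can then be cleaned up and sent to the model as its own independent question.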


GPT-4 or GPT-3.5?

They're completely different products.


We need to stop propagating this nonsense. GPT-4 still messes up.


OP said they're different products, which is very true. Yes, GPT-4 still makes mistakes, but it's leagues ahead of 3.5.


It's not nonsense at all. The degree to which GPT-4 messes up is dramatically lower than 3.5.


The nonsense is the automatic conclusion that when ChatGPT messes up, it must be 3.x, which is demonstrably false.


But no one said that. The GP just said they were different products, and that is true. It is meaningless to show an error that GPT-3.5 produced in a conversation about GPT-4. They are separate products.




