eh, I haven't personally found a use case for LLMs yet, given that you can't trust the output and it needs to be verified by a human (which might well be just as time-consuming/expensive as actually doing the task yourself).
I’d reconsider the “might well be just as time-consuming” thing. I see this argument about Copilot a lot, and it’s really wrong there, so it might be wrong here too.
Like, most of the time I’m using it, Copilot saves me 30 seconds here and there, and it takes me about a second to look at the line or two of code and go “yeah, that’s right”. It adds up, especially when I’m working in an unfamiliar language and forget which Collection type I’m going to need or something.
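To give a concrete (made-up) example of the kind of one-liner I mean — Python here just for illustration, where I always blank on which `collections` class I want:

```python
from collections import Counter

text = "the quick brown fox jumps over the lazy dog the fox"

# The kind of completion Copilot fills in for me: I know I want
# "count the words and grab the top few", I just blank on Counter.
word_counts = Counter(text.split())
top_words = word_counts.most_common(3)
print(top_words)
```

Verifying that takes a glance; remembering it cold takes a trip to the docs.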
> Like, for most of the time I’m using it, Copilot saves me 30 seconds here and there and it takes me about a second to look at the line or two of code and go “yeah, that’s right”.
I've never used Copilot but I've tried to replace StackOverflow with ChatGPT. The difference is, the StackOverflow responses compile/are right. The ChatGPT responses will make up an API that doesn't exist. Major setback.
Thing is, you can't trust what you find on Stack Overflow or other sources either. And searching, reading documentation, and so on takes a lot of time too.
I've personally been using it to explore different libraries for producing charts. Using ChatGPT, I managed to try out about 5 different libraries in a day, with fairly advanced options for each.
In the past I might have spent a day just trying one, and not to the same level of functionality.
So while it still took me a day, my final code was much better fitted to my problem, with more functionality. Not a time saver for me, then, but a quality enhancer, and I learned a lot more too.
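For a flavour of the kind of thing it produced — this is just an illustrative sketch with matplotlib and made-up data, not my actual code:

```python
import matplotlib.pyplot as plt
import numpy as np

# Made-up data standing in for my real dataset
x = np.linspace(0, 12, 100)
sales = 50 + 10 * np.sin(x) + 2 * x
growth = np.gradient(sales, x)

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(x, sales, color="tab:blue", label="sales")
ax.set_xlabel("month")
ax.set_ylabel("sales")

# One of the "advanced options" it walked me through:
# a secondary y-axis for a series on a different scale.
ax2 = ax.twinx()
ax2.plot(x, growth, color="tab:orange", linestyle="--", label="growth rate")
ax2.set_ylabel("growth rate")

# ...and annotating a point of interest on the curve.
ax.annotate("launch", xy=(6, sales[50]), xytext=(7.5, sales[50] + 12),
            arrowprops=dict(arrowstyle="->"))

fig.tight_layout()
plt.show()
```

Getting to the twin-axis and annotation options on my own would have meant a lot of doc reading per library.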
Maybe, maybe not. I get useful results from it, but it doesn't always work. And it's usually not quite what I'm looking for, so then I have to go digging around to find out how to tweak it. It all takes time, and most of the time you don't get a working solution out of the box.
They're good for tasks where generation is hard but verification is easy. Things like "here I gesture at a vague concept that I don't know the name of, please tell me what the industry-standard term for this thing is" where figuring out the term is hard but looking up a term to see what it means is easy. "Create an accurate summary of this article" is another example - reading the article and the summary and verifying that they match may be easier than writing the summary yourself.
I've enjoyed using it for very small automation tasks. For instance, it helped me write scripts to take all my audiobooks with poor recording quality, split them into 59-minute chunks, and upload them to Adobe's free audio enhancement site to vastly improve the listening experience.
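The splitting step is only a few lines if you lean on ffmpeg. Something like this sketch (paths are placeholders, and the upload to the Adobe site isn't shown here):

```python
import subprocess
from pathlib import Path

AUDIOBOOK_DIR = Path("audiobooks")  # placeholder input folder
OUTPUT_DIR = Path("chunks")
CHUNK_SECONDS = 59 * 60             # 59-minute chunks

OUTPUT_DIR.mkdir(exist_ok=True)

for book in AUDIOBOOK_DIR.glob("*.mp3"):
    # ffmpeg's segment muxer splits the file without re-encoding
    subprocess.run([
        "ffmpeg", "-i", str(book),
        "-f", "segment",
        "-segment_time", str(CHUNK_SECONDS),
        "-c", "copy",
        str(OUTPUT_DIR / f"{book.stem}_%03d.mp3"),
    ], check=True)
```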
No? I use it all the time to help me, for example, read ML threads when I run into a term I don't immediately understand. I can do things like ask it to 'explain this at the level of a high school student'.