If you can't tell if I'm being sarcastic about an LLM asking clarifying questions, you are possibly not great at using LLMs.

Prompting isn't necessarily the career some people were sold on, but it's not a bad idea to practice a bit and build a sense of what a clear and effective prompt looks like.

-

To be clear, I get that telling people it's a "you" problem every time an issue with LLMs comes up isn't helpful... but sometimes the disconnect between someone's claimed experience and the little that most people can actually agree LLMs are capable of is so great that it must be PEBCAK.

I just tried the original checkpoint of GPT 3.5 Turbo and it was able to handle drilling up and down in specificity as appropriate with the prompt: "I need you to help me remember a movie. Ask clarifying questions as we go along."
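For anyone who wants to reproduce that, a minimal sketch of the experiment looks roughly like this. It assumes the OpenAI Python SDK (v1+) and that "gpt-3.5-turbo-0301" is the snapshot meant by "original checkpoint"; swap in whatever model name you actually have access to.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "user",
     "content": "I need you to help me remember a movie. "
                "Ask clarifying questions as we go along."},
]

# Simple back-and-forth loop: print the model's clarifying question,
# then feed your answer back in as the next user turn.
while True:
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo-0301",  # assumed name of the original checkpoint
        messages=messages,
    ).choices[0].message.content
    print(reply)
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": input("> ")})
```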

I think it depends a lot on what LLM and tooling you have access to.

I had great results prototyping agents at work for specific tasks, answering in a specific style and asking appropriate clarifying questions.

But at home with just free/local options? I have nowhere near the same settings to play with and have only had very mixed results. I couldn't get most models to follow simple instructions at all.
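As a concrete example of the kind of instruction-following in question, here is a rough sketch of giving a local model a system prompt that tells it to ask clarifying questions. It assumes an Ollama-style server on localhost:11434; the model name and prompts are placeholders, not anything I've verified works well.

```python
import json
import urllib.request

payload = {
    "model": "llama3",  # placeholder: whatever local model you have pulled
    "stream": False,
    "messages": [
        {"role": "system",
         "content": "Help the user identify a movie they half-remember. "
                    "Ask one clarifying question at a time before guessing."},
        {"role": "user", "content": "It had a heist and a twist ending."},
    ],
}

# POST the chat request to the local server and print the model's reply.
req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["message"]["content"])
```

Whether the model actually sticks to "one clarifying question at a time" is exactly the part that varies wildly between models.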


Nobody has more PEBKAC problems than me. It might be that my experience is colored by older models; I can give it another shot. But I do use LLMs quite a lot and have gotten decent and surprising functionality out of them in the past. I was just consistently vexed by this one thing when doing information-seeking, search-like activity.


