
In your locality.

I've asked LLM systems "can you..." questions. Surely I'm asking about their capabilities and the parameters they're allowed to operate within.

Apparently you think that means I'm brain damaged?




Surely there are better windmills for you to tilt at.


For sure.

It's basically an observation on expectations wrt regional language differences. HAND.


LLMs are usually not aware of their true capabilities, so the answers you get back have a high probability of being hallucinated.


So far, they seem to be correct answers.

I assume it's more part of an explicitly programmed set of responses than standard inference. But you're right that I should be cautious.

ChatGPT, for example, says it can retrieve URL contents (for RAG). When it runs an inference it shows a message indicating that the retrieval is happening. In my very limited testing it has responded appropriately; e.g. it can talk about what's on the HN front page right now.

Similarly, Claude.ai says it can't do such retrieval (except through API use?) and doesn't appear to do so.
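
One way to sanity-check these capability claims, rather than taking the model's word for it, is to compare what it says it can do against what it actually returns for a URL whose current contents you can verify yourself. A minimal sketch, assuming the OpenAI Python SDK, an OPENAI_API_KEY in the environment, and a plain chat-completions call with no browsing tool attached; the model name here is an illustrative choice, not something from this thread:

    # Probe a model's self-reported URL-retrieval capability, then test it.
    # Assumes: OpenAI Python SDK installed, OPENAI_API_KEY set, and that a
    # plain chat-completions call has no browsing tool, so a confident
    # "here is the front page" answer would be a hallucination.
    from openai import OpenAI

    client = OpenAI()
    PROBE_URL = "https://news.ycombinator.com/"  # a page we can check ourselves

    # 1. Ask the model what it believes it can do.
    claim = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user",
                   "content": "Can you retrieve the contents of a URL I give you? Answer yes or no."}],
    )
    print("Claimed capability:", claim.choices[0].message.content)

    # 2. Ask it to actually exercise that capability, then compare the
    #    output against the real page by hand.
    attempt = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": f"List the top three story titles currently at {PROBE_URL}."}],
    )
    print("Attempted retrieval:", attempt.choices[0].message.content)

If the model answers "yes" but the listed titles don't match the live page, the capability claim was hallucinated; the consumer chat products add retrieval tooling on top of the bare model, which is why ChatGPT and Claude.ai can behave differently from their APIs.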



