It’s been mostly fine for me, but overall I’m tired of every answer having a paragraph-long disclaimer about how the world is complex. Yes, I know. Stop treating me like a child.
Does that have to be at the beginning of every answer though? Maybe this could be solved with an education section and a disclaimer when you sign up that makes clear that this isn't a search engine or Wikipedia, but a fancy text autocompleter.
I also wonder if there is any hope for anyone as careless as the lawyer who didn't confirm the cited precedents.
Imagine how many tokens we're wasting on inline disclaimers instead of putting them to productive use. Showing the disclaimer through a non-LLM mechanism seems really worthwhile.
Since there's about a million startups that are building vaguely different proxy wrappers around ChatGPT for their seed round, the CYA bit would have to be in the text to be as robust as possible.
> And yet the moment they do that some lawyer submits a bunch of hallucinations to a court and they get in the news.
That's the lawyer's problem; it shouldn't become OpenAI's problem or that of its other users. If we want to pretend that adults can make responsible decisions then we should treat them as such, and accept that there'll be a non-zero failure rate that comes with that freedom.
Use a jailbreak prompt or use something like this:
"Be succinct yet correct. Don't provide long disclaimers about anything, be it that you are a large language model, or that you don't have feelings, or that there is no simple answer, and so on. Just answer. I am going to handle your answer fine and take it with a grain of salt if necessary."
I have no idea whether this prompt helps because I just now invented it for HN. Use it as an inspiration of a prompt of your own!
Much like some people struggled with how to Google properly, some people will struggle with how to prompt AI properly. Anthropic has a good write-up on how to write prompts and why it matters:
I got it to talk like a macho tough guy who even uses profanity and is actually frank and blunt to me. This is the chat I use for life advice. I just described the "character" it was to be, and told it to talk like that kind of character would talk. This chat started a few months ago so it may not even be possible anymore. I don't know what changes they've made.
If people have saved chats maybe we could all just re-ask the same queries, and see if there are any subtle differences? And then post them online for proof/comparison.
I have a saved DAN session that no longer runs off the rails - for a while this session used to provide detailed instructions on how to hack databases with psychic mind powers, make up Ithkuil translations, and generate lists of very mild insults with no cursing.
It's since been patched, no fun allowed. Amusingly its refusals start with "As DAN, I am not allowed to..."
Probably picked it up from the training data. That's how we all talk nowadays. Walking on eggshells all the time. You have to assume your reader is a fragile, counterpoint-generating factory.
HN users flip out about this all the time. I wish there were an "I know what I'm doing. Let me snort coke" tier that you pay $100/mo for, but obviously half of HN users will start losing their minds about hallucinations and shit like that.