
I think I'm settling on a "Gell-Mann Amnesia" explanation of why people are so rabidly committed to the "acceptable veracity" of LLM output. When you don't know the facts, you're easily misled by plausible-sounding analysis, and once misled, a default prejudice toward existing beliefs takes over. There's a significant asymmetry of effort between changing a belief and acquiring one. I think there's also an ego-protection effect here too: if I have to change my belief, then I was wrong.

There are Socratically-minded people who are more addicted to that moment of belief change, and hence overall vastly more sceptical -- but I think this attitude is extremely marginal, and it probably requires a lot of self-training to be properly inculcated.

In any case, with LLMs, people really seem to hate the idea that their beliefs about AI and their reliance on LLM output could be systematically mistaken -- all the while, when shown output in an area of their own expertise, immediately realising that it's full of mistakes.

This, of course, makes LLMs a uniquely dangerous force in the health of our social knowledge-conductive processes.



You need to be pushing much more data in than you're getting out: 40k tokens of input might yield 400 tokens of genuinely good output. Not giving the model enough input to work from results in regressed, generic output.

It's basically like a funnel, which can also be used the other way around if the user is okay with quirky side effects. It feels like a lot of people are using the funnel the wrong way around and complaining that it's not working.
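A rough way to respect that ratio is to count what you're feeding in and cap what you ask back. A minimal sketch, assuming the `openai` and `tiktoken` packages and an OPENAI_API_KEY in the environment; the file name, model name, and the 100:1 target are illustrative assumptions, not anything these products prescribe:

    # Sketch: push a large document in, ask for a small, dense output.
    # "source_material.txt", the model name, and the 100:1 ratio are
    # hypothetical placeholders.
    import tiktoken
    from openai import OpenAI

    enc = tiktoken.get_encoding("cl100k_base")
    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("source_material.txt") as f:     # e.g. ~40k tokens of notes/papers
        source = f.read()

    n_in = len(enc.encode(source))
    n_out = max(200, n_in // 100)              # aim for roughly 1% as many tokens out

    resp = client.chat.completions.create(
        model="gpt-4o-mini",                   # placeholder model name
        messages=[
            {"role": "system", "content": "Summarise only what the text supports."},
            {"role": "user", "content": source},
        ],
        max_tokens=n_out,
    )
    print(f"{n_in} tokens in -> at most {n_out} tokens out")
    print(resp.choices[0].message.content)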


Sure, if you have a high-quality starting point and need refinement.

The issue is that the vast majority of user-facing LLM use cases are where people don't have these high-quality starting points. They don't have 40k tokens to make 400.


You can attach 40k tokens of context directly in the Gemini, ChatGPT and Claude web interfaces, afaik. If someone is using an LLM as a tool to actually help in an area they're already professionals in, pulling together good books, research, etc. as attachments shouldn't be an issue.

But yes, the default mode of LLMs is usually a WikiHow and content farm style answer. This is also a problem with Google: The content you get back from generic searches will often be riddled with inaccuracies and massive generalizations.

Not being able (or bothering) to come up with relevant context, and throwing the dice on the LLM managing it out of the box, is definitely a serious issue. I really think that's where the discussion should be: focused on how people use these tools, just as you can tell quite a bit about someone's expertise from the specific way they interface with Google (or any information on the internet) while they work.
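If you're working through the API rather than the web UI, the habit of assembling your own context first is only a few lines. A sketch of the pattern under stated assumptions: the "references" directory, the model name, and the question are hypothetical placeholders.

    # Sketch: gather curated reference material yourself and put it in
    # front of the question, rather than hoping the model knows it out
    # of the box. Directory, model name, and question are placeholders.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY in the environment

    sources = []
    for path in sorted(Path("references").glob("*.txt")):  # your own curated files
        sources.append(f"--- {path.name} ---\n{path.read_text()}")

    prompt = (
        "Answer using only the attached sources. If they don't cover the "
        "question, say so instead of guessing.\n\n"
        + "\n\n".join(sources)
        + "\n\nQuestion: <your actual question here>"
    )

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)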


Bullshit works on lots of people. Seeming to be true, or even just plausible, is enough for most people. This is why powerful bullshit machines are dangerous tools.


If people were easy to convince that they had been deceived, I wouldn't mind so much. It's the extraordinary lengths people will go to in order to protect the bullshit they acquired with far less scepticism: genuinely wild leaps of logic, shallowness of reasoning, non-sequiturs obvious on their face, claims offered as great defeaters which a single moment of reflection is enough to see through.

This is the problem: how bullshit conscripts its dupes into this self-degradation and into bad-faith dialogue with others.

And of course, how there are mechanisms in society (LLMs now one of them) which correlate this self-degrading shallowness of reasoning -- so that all at once an expert is faced with millions of people with half-baked notions and a great desire to preserve them.


> It's the extraordinary lengths people will go to in order to protect the bullshit they acquired with far less scepticism

That's narrative bias at play. We are all subject to it, and for good reason: people need stories to help maintain a stable mental equilibrium and a sense of identity. Knowledge that contradicts the stories forming the foundation of their understanding of the world can be destabilizing, which nobody wants.

Especially when they are facing struggle and stress, people will cling to their stories, even if a lie or deception in the story might be harming them. Religious cults and conspiracy theories are often built on this tendency, but so is culture in general.


I think there is a certain sort of person who, if not "wants" this destabilization, doesn't really experience the alternative. People primarily relating to the world through irony, say. So, characteristically, Socrates (and some stand-up comedians, and the like), who trade in aporia -- this feeling of destabilization.


> I think there is a certain sort of person who, if not "wants" this destabilization, doesn't really experience the alternative.

I agree, but the ability/willingness to engage in that kind of destabilizing irony itself comes from a certain stability, where you can mess with the margins of your own stories' contradictions, without putting the core of your stories under threat.



