> One thing I’ve noticed is that different people get wildly different results with LLMs, so I suspect there’s some element of how you’re talking to them that affects the results.
Which is Fortuna's work... stochastic models are like that. Confirmation bias is another factor, as is the question of "how do LLMs align with my worldview" — whether that leads me to see them more positively or more negatively.