This is dangerous because people who have no knowledge of the science will blindly trust whatever it summarizes. There is no way to verify it. For example, if you ask it to summarize a book on a subject you understand, you can at least sense some BS or open the book and verify a few points. Here you would be at GPT-3's mercy.
I'm so tired of all discussions on LLMs starting with "this is dangerous" or some form of it. First, because at some point this discourages people from even attempting cool things with LLMs, and second, because it really stalls the discussion. We are all aware that there is a hallucination issue in LLMs, so what do we do about it? What's your proposal? If it's just "don't do it", I don't think that's useful. If we were following the true spirit of HN, we would be giving suggestions on what to change. Even just a suggestion to add disclaimers everywhere would be better than these "it's dangerous" comments. Not everything needs to be perfect; for a hobby project, not everything even needs to be working. These comments are just discouraging for no real reason.
Edit:
[*] 'we' in my comment here refers to the HN community, not the entirety of humanity.
I feel the same way about “chainsaw juggling for babies” classes at daycare centres. People are so quick to jump to “that’s dangerous” or “ouch, you cut off my arm” rather than engaging with the subject and suggesting ways the babies can be better taught not to decapitate their carers.
This is such a false equivalence. We are talking here about adults who want to build something cool. If we are going to define the scope of everything as "useful for everyone and 100% safe from the beginning", I think we'll get nothing done. If you think the risk here is equivalent, I really don't know what to say to you.
That is an opinion that you're absolutely entitled to have.
My own opinion is that the cat's out of the bag, so whatever's going to happen is going to happen wrt LLMs. But trying to shut down all criticism of a new thing just because you think it's cool is itself not cool.
And those little tots sure do look cute spinning the 'saws, right?
Well that’s a thing to say. Not what I actually said but absolutely a valid position. Everything has pros and cons and it’s absolutely valid to ask if the cons outweigh the pros. To deny that is just silly.
Some of us respond to uses that align with the technology’s strengths with excitement and encouragement, and to uses that rely on its weaknesses with criticism and warning.
That seems useful, prudent, and completely in line with the spirit of a community like HN.
There’s no more reason that every critique should come with a “proposal” than that every cheer should come with some kind of admonition. As a community, multiple points of view are expressed and developed simultaneously.
Of course, some points of view might personally frustrate you or leave you feeling like you don’t know how to respond to them. But is that so bad? Does it need to be squelched just because you don’t enjoy it?
I think you hit the nail on the head: it does frustrate me. Not only because it's repeated often, but also because it's applied equally everywhere. Look at this project, for example. It's an extension for arXiv, a pre-print repository for cutting-edge research. Do we really need to keep the entirety of humanity in mind while making this? Because that's what the original comment was saying. The way I see it, arXiv is mostly read by experts who would know when something is completely off and will probably look into things beyond just reading the GPT summary. If I were still actively doing research and hitting the site every day, this would have immensely streamlined my flow (maybe with a few tweaks to the prompt to emphasize the sections I'm usually more interested in).
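Something like this is what I mean by a prompt tweak. Purely a rough sketch on my part: I have no idea how the extension actually calls the model, so the prompt text, model name, and summarize helper below are my own invention, assuming it goes through OpenAI's chat completions API.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical prompt tweak: weight the summary toward the sections
    # a given reader cares about instead of a generic abstract rehash.
    SYSTEM_PROMPT = (
        "Summarize the following arXiv paper for an expert reader. "
        "Emphasize the methodology and experimental setup, and explicitly "
        "note sample sizes and statistical significance where reported."
    )

    def summarize(paper_text: str) -> str:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": paper_text},
            ],
            temperature=0.2,  # stay close to the source text
        )
        return response.choices[0].message.content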
But that's not how any of this is discussed. That's where my scope comment comes from: the scope of every project cannot be "for everyone and 100% safe from the beginning". In this way there is no encouragement to discuss or make better things, just discouragement. I, personally, hate this.
I don't know, man, maybe I'm wrong. I've been using HN for over a decade now, and I've found constructive criticism to be a huge part of the positive culture here. It can of course be selection bias, as I try to read more of that stuff and collapse a thread as soon as it heads in a less constructive direction.
But on the other hand, this is what the guidelines say:
> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
I don't think these types of comments teach us anything. But these are guidelines and not rules for a reason.
> I don't think these types of comments teach us anything.
Comments like the one at the root of this thread teach people that the current generation of LLMs is poorly suited to certain kinds of tasks, and remind us that many people don’t yet seem to understand what their systemic limitations are.
LLMs are statistical models over their training data, and they aim responses mostly towards the most dense, data-rich, and redundant centers of their corpus. Summarizing novel, esoteric, or expert material is something they’re poorly suited for, because that material inherently has poor representation in the training data.
It's not as if people on HN are unaware of these issues; there is an article about this almost every day. It would be constructive feedback, in my view at least, if they actually showed what's wrong: basically, why it can't be useful even when an expert is the one looking at it. Targeting arXiv gives us the opportunity to assume certain things about our users.
If I were to suggest something, it would be to wait for OpenAI to get their stuff together before creating a summarizing bot that uses a model it doesn’t own.
I don't think so. The average person already thinks (Chat)GPT is an all-knowing AGI homework-solver, and the problem only worsens if you add the airs of "science" to the situation.
If you have used GPT's summary function, you know it can be outright wrong but sound very plausible. With the amount of disinformation out there, people who are interested in science but want it the easy way could make things worse. Imagine the summary states that certain meds give good results, but without the right statistical context: the effect could be only marginal, or not even statistically significant to a trained reader. Now they pass it along and start validating their own biases.