
One of my first serious uses for GPT is summarizing the abstracts too.



I don't think it is serious. An abstract already tends to be a short summary; in many cases it is a single short paragraph. I was skimming arxiv just before I opened HN and saw a couple of abstracts that are 4-5 sentences. Is ChatGPT going to write a one-sentence summary, or will the summary of the summary be comparable in length or even longer?


Abstracts can also be close to a full page long. While this doesn't have to be bad, it's usually more information than you're looking for (especially if you were only going to read the abstract anyway).


I can skim several full pages in the time it takes just to send off the abstract and get the response back...


Can you skim several full pages minus one and read the abstract last? If it's not effort you have to expend, you've won.


then batch summarise your paper feed every morning
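
A rough sketch of what I mean, in Python. Assumptions: the official openai and feedparser packages, an OPENAI_API_KEY in the environment, and a placeholder feed URL and model name - swap in whatever you actually follow:

    import feedparser          # pip install feedparser
    from openai import OpenAI  # pip install openai

    # Placeholder feed: the arXiv computation-and-language RSS.
    FEED_URL = "http://export.arxiv.org/rss/cs.CL"

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def one_liner(abstract: str) -> str:
        # Ask for a single sentence so the summary is always
        # shorter than the abstract it came from.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat model works
            messages=[
                {"role": "system",
                 "content": "Summarize this paper abstract in one sentence."},
                {"role": "user", "content": abstract},
            ],
        )
        return resp.choices[0].message.content

    for entry in feedparser.parse(FEED_URL).entries:
        print(entry.title)
        print("  " + one_liner(entry.summary))

Run it from cron each morning and you only ever read the one-liners, drilling into a full abstract when a title plus one sentence looks relevant.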


I would prefer to optimise things that actually take a lot of time. It often takes me just a few seconds to eliminate a paper as irrelevant to my search based on its abstract.

If I can't, I'm gonna have to skim the paper anyway. But even that could be pretty quick.

Quickly reading something to gauge relevance is something I can confidently say I do much faster than GPT can.

And I don't have a paper feed. I look up papers relevant to what I'm working on at the time.


Length isn't always the indicator. Abstracts can be information-dense, and sometimes more words are easier to understand, or sometimes an abstract is just full of jargon. For example, this paper on GPT-3:

"Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine- tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning. with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general."

The summary:

- If you train a computer on a lot of words, it can do things a lot better

- In this case, the computer has learned how to translate languages, answer questions, and unscramble words

- It still has trouble with some things

- This is very interesting because it shows computers can do things which are very hard

So here, it drops a lot of information (cloze tasks), and it adds some (that last point). But now I know what the paper is about in 10 seconds.

I go back and see that, oh, "a lot of words" really means 10x more than any previous model. I'm now hooked on what problems it has trouble with, and what problems it solves for humans.

I didn't know a thing about LLMs when I first read this. If I tried to read it top to bottom, I'd get stuck on "task-agnostic, few-shot performance" then "state-of-the-art fine-tuning approaches" then "an autoregressive language model". They're big words, but turns out they're not the interesting parts, and understanding what the paper is excited about helped me to understand the basics.


1. What a terrible abstract. That abstract makes me hesitant to bother reading the paper at all. An overly long, overly detailed abstract is a sign of a bad(ly written) paper in my experience.

2. What a useless summary! This summary is so dumbed down it could describe literally any paper on LLMs. This would give me zero information on whether the paper is worth further reading.


Those have to be some of the worst abstracts I've ever seen, one written by a human and the other by an AI.


Also, the abstract did a poor job of conveying the interesting points of the paper. This was one of the earlier papers that raised the risks and dangers of such a technology. People tend to say "AI is dangerous", but this paper actually defines where and how the harm comes in.



