Hacker News

I personally find it useful. As a CS researcher, if I want to delve into a new topic (for example, because a specific line of work comes out where I think my research could be applicable) I typically can start by directly reading the paper I'm interested in, and in the introduction I'll find a brief history and some references I need to look at for context.

It's much better than having to look through all the relevant papers in that topic in the last decades in order because they all assume knowledge of the past ones.

Of course, I know there are survey papers, and they can be very useful, but they're typically too general and not specifically oriented to what you need to understand that particular paper you're interested in. Plus, you inherit the biases of whoever compiled the survey, which in my experience are often significant.




I rarely read the CS literature, so I can't say much about that. From reading cheminformatics papers by CS authors, I see that their history sections often reflect only an incomplete understanding.

For example, in a paper I reviewed, the CS authors completely misread a paper. They wrote something like "method X has been used in cheminformatics before [cite], but the CS literature has improved on that with method Y." But the paper they cited actually used method Y.

Now, the paper itself didn't need that level of detail about the history. That error, and others like it, bugged me because they came across as dilettantes, writing with more assurance than they actually had, and because their history was biased towards the CS methods they knew, which made it feel like they snubbed the cheminformatics methods and treated the previous work in this field as second-class material.

I can't help but think that the process you describe, where someone new to a topic must write a history section just to publish, results in a lot of half-baked history, as newcomers just don't have the experience to give the history a good treatment. Instead, they'll see that 15 other papers covered points A-F, so they follow the tradition that they need to cover points A-F, but with a different slant.

I'm not saying my field is immune to that! There's a well-known observation along the lines that "similar structures tend to have similar properties." Many people will cite a 1990 book as the source of that quote. Except that that book doesn't contain that quote. Most people know it only second- or third-hand, which has resulted in the common but incorrect practice of making that citation. It's a litmus test I use to tell whether the authors really know their history.

Q: If you write multiple papers on a new topic, do you still write histories for each one? Or can you refer to your previous publications for the history?


What you say does happen, and it's an interesting perspective (I guess I had always seen the "half-baked history" sections as something annoying but inevitable). Pick your poison, I guess.

And the answer is that in general, we do write (short) histories for each paper, except maybe in short conference papers (limited to 4 pages or so), where it's OK not to include one.


I've wondered for a while now how funding and other constraints affect fields of science. In math, CS, or SWE it's easy to pick up a new topic, but in biology and chemistry people seem to have an overwhelming tendency to work for decades at a time on a single problem area. CS papers likely go over history because it tends to be more useful; in chemistry, all interested readers may already have 5+ years in the field.



