Hacker News
How to seriously read a scientific paper (sciencemag.org)
289 points by NalNezumi on Nov 4, 2020 | 90 comments



By the time I was in year 4 of my PhD, reading a paper mostly involved: look at the title, then go directly to figure 1 and figure 8 to judge whether the title actually matches the result. If the result is of genuine interest, read the abstract, and then go through the figures carefully. Otherwise, for the most part the remainder of the paper is left alone. This only applies to papers where I already know the field and am only trying to gauge the incremental addition the paper makes to the knowledge. The style will be different for a paper in an unknown field; there it's best to just read it as prose from top to bottom with a marker at hand.

To add more: not reading any of the text makes you laser-focused on figuring out what the data means, without the authors sugar-coating anything with their perspective/agenda.


I've had many people suggest just reading the figures, but I've found that most scientists hide their sins in the methods section, and the figures cannot be properly interpreted without careful inspection of the methods (which often demonstrates that the authors didn't really do a good job).

Also, I've noticed that a large number of papers with faked image data in figures have gone unnoticed by most readers. People look at the figures hoping to see what they want to see and aren't critical enough about the process used to generate them. (When I wrote my PhD thesis, all figures were programmatically generated by version-controlled code on well-managed data.)


I agree with this. The results and discussion are usually the least interesting parts and figures are usually meaningless without seeing how the data used to generate those figures were collected.

The methodology tells you exactly what the people who wrote the paper did. It's where you can see whether they used a sample size of 10 vs 1000, what methods they used to sample their data, and the accuracy levels they used in analyzing their data. It's also how a study may be replicated.

Without reading the methods, you can't be sure about any other thing in the paper.

That's the problem I see with a lot of news reporting on science papers. They use the GP's method of reading: abstract, images, and maybe results if they're really trying.

That's how you end up with a lot of sensationalist, contradictory science headlines.


It is good you labeled the stage you were at (4th year of a PhD); the article misses this. I would say in the 1st year of my PhD the path was more like this: read the paper slowly top to bottom, understand nothing, then look for the part which looks the most interesting/approachable and read that again. Go to the citations of this section and repeat the process (a recursive algorithm). Take note of any papers being cited by most of the other papers, and invest time in those. Look for papers citing the paper you read (not possible for the cutting edge, of course). This process takes probably 2-3 months. Then do your own research on that topic, then go back to the paper, and now things are clearer. Do the same with a related paper in the field. Then one more. Probably by now you are in year two and can transition to your approach.
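That recursive citation walk can be sketched in a few lines. The paper names and the citation graph below are purely invented for illustration; the point is just that a depth-first walk plus a tally surfaces the works "cited by most of the papers":

```python
from collections import Counter

# Toy citation graph: each paper maps to the papers it cites.
# All names here are hypothetical, purely for illustration.
CITES = {
    "new_paper": ["method_A", "method_B", "seminal"],
    "method_A":  ["seminal", "dataset"],
    "method_B":  ["seminal"],
    "seminal":   [],
    "dataset":   [],
}

def explore(paper, seen=None, counts=None):
    """Recursively walk the citations of `paper`, tallying how often
    each cited work appears along the way. Heavily cited works are the
    ones worth investing real reading time in."""
    if seen is None:
        seen, counts = set(), Counter()
    for cited in CITES.get(paper, []):
        counts[cited] += 1
        if cited not in seen:       # avoid re-walking shared ancestors
            seen.add(cited)
            explore(cited, seen, counts)
    return counts

counts = explore("new_paper")
# The work cited by most of the others is the one to invest time in.
print(counts.most_common(1)[0][0])  # -> seminal
```

The `seen` set keeps the recursion from looping if two papers cite each other's lineage, which also makes the cost linear in the number of edges.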


Bugger me, that describes the process I (not a PhD) have gone through trying to digest some papers so well it is eerie.

I've been damned lucky. The worst paper, the one where I had to read most of the citations and quite a few of their citations before I got it, was http://conferences.sigcomm.org/sigcomm/1997/papers/p011.ps and yes, it took over a month before all the pieces settled into the right corners of my mind. Perhaps more accurately, it took months for my mind to create the right corners for the concepts to settle into.

But that was fine - I knew before I started it was the seminal paper on the subject, and so it would be worth whatever effort it took. The idea of wasting that inordinate amount of time going down that path with one dud after another makes me shudder in horror.


Did you read the link to "Adam Ruben’s tongue-in-cheek column"? It had me laughing, and is much more like the process you described.


Now I've read it. Actually I think it is not 'tongue-in-cheek' (apart from making light of a hard thing).


Can you please share the link? I reached his homepage but no dice: http://adamruben.net/


It's the first hyperlink in the text of the linked-to article, going to http://www.sciencemag.org/careers/2016/01/how-read-scientifi... .


Looking now at various definitions of 'tongue-in-cheek', I think you're right.


Haven't done a PhD, but I took some time off from work to read/research on my own, and this year-1 PhD description resonates well with me. Read a paper top to bottom, feel the despair (feeling dumb), and look for a part that I understand.

The process was an utter waste of time except for when the paper in question was a survey paper.


Indeed! Even after a decade you still have to do this for any new field, though you feel less anxious about it! I loathe and love journal club topics that make me do this!


A cautionary tale from my CS desk: first dozen papers, just what was described above.. next, complete actual work using the knowledge - hey! I belong here! .. next, read a few more, decide "I can do this", and collect three or six dozen additional papers from the reference notes, new discoveries, and latest pubs, mixing them all into the same collection of PDFs (!)

now you have sixty+ complex papers, at least a third of which are not actually very important, useful or thorough.. and where are those original, carefully chosen ten you started with ?

side note - the "focus on the figures" reading advice does not scale, since most search is first and foremost with text. Which of the now-eighty papers (and growing, the field is hot) are the ones you cared about and understand.. ? "piled higher and deeper - PhD" indeed!


As a layman my impression is most papers are too verbose. At least the ones that I can understand. After reading tons of steganography papers (for example) I found they tend to begin with the same retelling of the history of steganography from the beginning of time up to now, even though it's completely irrelevant to the topic of the paper (which is not history).


While that is true, it also helps people new to a scientific area. I can read 3-5 papers on a topic and the first chapters of each will give me enough to understand the gist of the topic, the status quo.

It's a question of who you prioritize while writing a paper. Experts or the interested layman.

If you prioritize people who are already experts in the topic, you write little to no introduction, you get to the results quickly.

If you prioritize the "curious, interested layman" (or university students in their first years), a short introduction to a topic with references will provide enough information to understand the basics and the reader can continue reading the rest of the paper with enough context to understand why the topic is relevant.


The way I write a paper for the "curious, interested layman" is different from the way I write for experts. It's hard to keep both in mind when writing.

Laymen, for example, might need some figures to help understand a topic that experts internalized years ago.

I prefer having occasional well-written review papers, meant for getting non-experts up to speed. Then the domain experts - who are often not experts at writing for laymen - can refer people to those review papers, possibly also with a history delta for what's new since the review.

"For a comprehensive review, see" https://scholar.google.com/scholar?q=%22for+a+comprehensive+...


There's an added difficulty of writing to "tangential experts". I've had papers in the past that were 'tweeners, between two related but different fields. Depending on the audience, background information was needed that may be tedious for the other group.


Does content like this ever get published in two different journals, for different audiences, with different intro/lit reviews?


As far as I know, not really. Most journals require you not to submit to another journal until they've made a decision to reject, so you can't submit to multiple. Even though they may have different intros, they're presenting the same data.


> While that is true, it also helps people new to a scientific area.

That's what surveys are for.


I also have the impression that CS papers all too often recapitulate the topic's history. I don't see the point. Other fields seem to leave it as "for a review of the topic see", cite someone(s) else, and get on with the paper.

My field of cheminformatics, while CS-adjacent, inherits more strongly from the chemistry traditions. The occasional outsider papers from a CS department generally cause my eyes to glaze over if they follow that history-retelling CS tradition.


I personally find it useful. As a CS researcher, if I want to delve into a new topic (for example, because a specific line of work comes out where I think my research could be applicable) I typically can start by directly reading the paper I'm interested in, and in the introduction I'll find a brief history and some references I need to look at for context.

It's much better than having to look through all the relevant papers in that topic in the last decades in order because they all assume knowledge of the past ones.

Of course, I know there are survey papers, and they can be very useful, but they're typically going to be too general and not specifically oriented to what you need to understand that specific paper you're interested in. Plus, you inherit the biases of whomever compiled the survey, which in my experience are often significant.


I rarely read the CS literature, so I can't say much about that. From reading cheminformatics papers by CS authors, I see that their history sections often reflect only an incomplete understanding.

For example, in a paper I reviewed, the CS authors completely misread a paper. They wrote something like "method X has been used in cheminformatics before [cite], but the CS literature has improved on that with method Y." But the paper they cited actually used method Y.

Now, the paper itself didn't need that level of detail about the history. That error, and others like it, bugged me because they came across as dilettantes, writing with more assurance than they actually had, and because their history was biased towards the CS methods they knew, which made it feel like they snubbed the cheminformatics methods and treated the previous work in this field as second-class material.

I can't help but think that the process you describe, where someone new to a topic must write a history section just to publish, results in a lot of half-baked history, as new people just don't have the experience to give the history a good treatment. Instead, they'll see that 15 other papers covered points A-F, so they follow the tradition that they need to cover points A-F, but with a different slant.

I'm not saying my field is immune to that! There's a well-known observation along the lines of "similar structures tend to have similar properties." Many people will cite a 1990 book as the source of that quote. Except that that book doesn't contain that quote. Most people instead know it second- or third-hand, which has resulted in the common but incorrect practice of making that citation. It's a litmus test I use to tell if the authors really know their history.

Q: If you write multiple papers on a new topic, do you still write histories for each one? Or can you refer to your previous publications for the history?


What you say does happen, and it's an interesting perspective (I guess I had always seen the "half-baked history" sections as something annoying but inevitable). Pick your poison, I guess.

And the answer is that in general, we do write (short) histories for each paper, except maybe in short conference papers (limited to 4 pages or so), where it's OK not to write any.


I've wondered for a while now how various funding and other constraints affect fields of science. In math, CS, or SWE it's easy to pick up a new topic, but in biology and chemistry people seem to have an overwhelming tendency to work for decades at a time on singular problem areas. CS papers likely go over the history because it tends to be more useful; in chemistry, all interested readers may have 5+ years in the field already.


> I also have the impression that CS papers all too often recapitulate the topic's history.

It's not really about history so much as context. Usually, you want to set up a paper with "This is the state of X as it exists right now, but there exists this problem Y. We solve Y by starting from X and making Z advancement."

I think part of the issue is CS is exceptionally young (less than 100 years total, really), and the other part is how rapidly it's advanced and diversified in that time (in terms of individual disciplines even within subfields). Without the context, I can't readily jump to a paper from an adjacent-but-not-directly-relevant subfield without needing to look up a bunch of other stuff. And it's not as straightforward to know where to look to find the relevant information. A paper 10 years old might be state-of-the-art or it might be completely outdated, and it's hard to know which if you're not immersed in that discipline. Having the context in the paper itself is a big help to ameliorating this.


In biology the intro is typically not too long and generally gives just enough info that even a person just browsing the journal can get context about the field and topic, plus a list of pre-read references they can look back to if needed. The perfect introduction would ideally give you an overview and a reading list that by itself would bring you up to speed to where the paper starts, without any need for second-level reference digging!


I can attest to this. I've often noticed that papers in biology (systems biology in my experience, at least) go directly to the point, with some amount of context and history leading up to the main results. This is something that's surprisingly lacking in engineering, where reading a paper or grasping the context usually requires at least some amount of prior learning.


That section is helpful for researchers both to contextualize the paper and to leave pointers to other papers the current one builds on. You’re right it’s not directly about the method in the current paper but it’s an important section nonetheless, even if it’s a bit meta.


> most papers are too verbose...After reading tons of steganography papers

Is it possible they were secretly reporting something else too?


Agreed. Although the order I learned was title, figures, materials and methods. Sometimes there's useful stuff in the materials and methods, and sometimes that's where the bodies are buried.


At least in bio papers, the good ones make sure to give the most important method details in the legend itself, so it's rare that I have to refer to the methods section. But yes, that does happen! Especially anything where they cure cancer in mice (the running joke is that you can also cure cancer in mice by stomping on them).


You forgot to mention reading the Related Works section to make sure you're referenced.



Ironic: if you can’t read a paper yet, how do you read the paper explaining how to read a paper?


You bootstrap by reading meta-scientific papers.


I was going to say "write ignorant comments on social media about the paper based solely on the title" but your approach sounds more useful.


You must be fun at parties.


One of the best ways into a new area is if you can find a well written PhD or MSc thesis in it. Finding that isn't always easy, but if they are well done the background introductory material will be approachable and much more complete than a paper, and the bibliography should be useful.


You make assumptions, just as when you learned your first language. Then you check if everything makes sense. If it does, then your assumptions were probably correct.


I think the authors also thought about that and wrote the paper for newbies. Almost every new researcher reads from top to bottom.


Posted this paper on HN long ago too. It is probably the most consistent guide on how to read a paper.

The sciencemag article is a nice read after that paper though, since there are a couple of small tips & tricks that are not in the paper (such as the emotional side of reading a paper, how to go on when you don't understand something or the jargon, when to give up and look for a better paper, etc.)


The part about finding a better paper is growing more important by the day. The arXiv is filling up with resume-padding garbage.


This a variation of: https://en.wikipedia.org/wiki/How_to_Read_a_Book

Different ends but similar means (multiple passes)


Thanks for this! The format of the interview is clearly not aimed at clarifying how to (seriously) read scientific papers, with contradicting views among the interviewees.


First read the abstract, if it doesn't confirm your preconceptions, try conclusions. If that doesn't confirm your preconceptions try results. If that too failed, then find a nitpick in methods. If even that fails, then move from the denial to anger stage of grief. If you are unable to eventually make it through all 5 stages, then refer to the quip about science advancing one funeral at a time.


> then refer to the quip about science advancing one funeral at a time.

Heard another one about airline safety guidelines written in blood.


> I like to print out the paper and highlight the most relevant information, so on a quick rescan I can be reminded of the major points.

I used to do this but I've tried to move to exclusively digital. I always, after some period of time, end up throwing out the papers I've printed. Plus digital allows for easier access across devices, quick searches, etc. Still though, I enjoy reading printed papers more.

Curious if anyone else has a similar experience with print vs digital.


I've been trying to make all-digital work for years. I feel like there are significant downsides with digital that make me want to go back to paper (but if I were to print out everything I read, my small apartment would be overflowing in a year and it seems so wasteful).

One of the most obvious benefits to paper that comes to mind is the ability to lay multiple non-sequential pages (from the same paper and from other papers) side-by-side. That spatial aspect isn't replicable afaict and I definitely feel like it limits me in many ways.


Completely agree on both points. Another thing for me is just grabbing a paper and a pen and knowing that's all I can do for the next X minutes. No digital distractions possible.


It's pretty replicable - you just open the same document multiple times! (Of course it's often easier to run out of screen space than out of desk space…)


It falls apart in how you interact with the pages. It's not even close to the same thing, so not replicable at all.


I've had this, and still do. I prefer print any time of the day. The compromise I make is a .txt summary file of the article: after I've thoroughly read the paper, I write down all the notes I made on the print (+ a summary) and store it digitally.

It takes extra time, and might feel wasteful, but I usually do it some days/weeks after I read the paper, for spaced-repetition purposes too.


I find when I highlight it's for two things: 1) noting key points to come back to, and 2) important concepts I'm not as familiar with as I could be; usually this surfaces unconsciously as a question or some piece of information that breaks expectations.

Instead of highlighting I take notes. I write down a list of key points, and a list of topics I'm not as familiar with as I'd like to be, including broken expectations.

I go further and google around, investigating this list and learning more on the topics beyond what the paper has to say.

This style may or may not be ideal for you, but it works well for me. Highlighting has always, for me, been not very helpful when compared to taking notes and getting a more solid understanding by combining sources of information on the topic when possible.


I'm using Xournal++ with tablet pen input to write annotations on the PDF; this might be doable if you have a pen tablet and an equivalent app. OCRing the notes for searching is too fiddly though…


What's your workflow for being 100% digital?


A mix of things that are constantly changing. I like Polar for reading, storing, and annotating papers - but I'm considering moving away from it. I also use Zotero for storing metadata about the papers because it has really great automatic metadata retrieval [0]. It's also good for searching, sorting, and whatnot based on this info.

My current thoughts are to move more to Zotero because it also has some annotation functionality, but to primarily use it as an extraction point and store the annotations/notes in plain text or markdown, which I'll then manage with git or something.

I've personally realized it's more important to me to have my notes about the papers easily accessible (hence plain text in git) than to have PDF annotations stored in an app alongside the PDF. I've also realized this has been a little bit of a shift in how I read, making me more focused on extracting information. Not sure how I feel about this yet.

[0] - https://www.zotero.org/support/retrieve_pdf_metadata


For annotations with Zotero, I don't even bother and annotate with the system PDF viewer directly on the PDF file (Preview, in my case). That keeps things together all the same for syncing across devices, and the annotations are platform-independent since they're right on the PDF file. If I need a clean copy I can grab the DOI from Zotero and download another in like 30 seconds.


Yeah, exactly what I'm moving towards. I've also started using the mdnotes plugin [0] to extract the annotations to markdown if I desire.

[0] - https://github.com/argenos/zotero-mdnotes


My favorite papers are those that clearly explain their thought process, show what they did in great detail, and do so in a clean, easy-to-read format. These papers can be used as a sort of makeshift tutorial, which can be incredibly helpful when studying an unfamiliar topic. (I actually like them better than tutorials, because I get some perspective into the authors' thinking and how they came to explore the topic, which makes it easier to generalize the lesson.)

Not just reading papers, but Wikipedia as well: When exploring a new domain of knowledge, you may not know much or any of the vocabulary being used. This turns the average person off. I see this as the most common hurdle faced by those self studying a new topic.

The trick is to slow down, look at the vocabulary you don't recognize, and then look it up. Sometimes, to learn this new vocabulary you'll be met with a wall of even more unfamiliar vocabulary. This turns into a dependency chain of unfamiliar topics. Looking up those words/dependencies recursively, you'll eventually get to familiar knowledge. You can start building up from that place of familiarity, and before you know it you've picked up not only the basics of a new domain of knowledge but also the more advanced topics that build up to what you were interested in.

Learning the prerequisites usually takes hours, but sometimes the process can take days. Taking days to work through a dependency chain to learn an advanced topic seems like a long time, but compared to taking a class on the topic or reading a textbook, this style of learning is highly accelerated. Sometimes there is no alternative if you need to parse a paper on a cutting-edge, unfamiliar topic not yet covered in textbooks, and imo if you pace yourself and value learning over any other goal, it can be quite a relaxing and enjoyable process. Reading papers is a lot of fun.

When I'm learning new topics, I take notes. My goal is to maintain what I learn, not have it go in one ear and out the other, so why not slow down, relax, and take notes? Better yet if I come back to the same topic months later I can look over my notes. If I forgot something about the topic, I know my study style could be improved. This meta-learning feedback loop can be super helpful if you value and enjoy learning new topics (which you do or you wouldn't be on YC).


Pretty good advice for a college undergrad. Thanks!


yw ^_^


The problem here is the writers, not the readers. Most people are just not good writers or good at communicating complex ideas clearly. But the obsession with publishing research in the journal-article format means there are a lot of reluctant writers writing on unengaging platforms.


I think Asimov explored this idea a little in one of his short stories.

I think there should be scientists and science writers. These writers should be employed by research institutions. Their job should be to understand the concepts and explain them in simple words.

I feel this would give the science world a lot more accessibility.

My problems with science papers have always been that the statistics are written incomprehensibly (no layman knows what a p-value is) and that they repeat themselves so much that half the paper is just filler.


My usual approach is to look for an abstract. If I see one, that means I've accidentally clicked a link to a scientific paper, and can proceed to close the tab.

(Hopefully you can take that as the satire it is...)


I feel lucky that I had a college class, "Methods in Binary Analysis," that was centered around reading new binary-analysis research every week, presenting what you learned from the paper, and then, as a final project, attempting to produce novel work from what had been read. It helped that the professor made a point of teaching us how to effectively read a paper, with many of the tips in this article being things that professor taught us.



Generally, my pattern goes like this:

1. Read summary/abstract.

2. Then skim intro.

3. Then read discussion/conclusion.

4. Then methodology.

5. Then read front to back.

6. Then read front to back again: but now we're being extra skeptical: we're questioning everything, looking for common bad practices, checking doubtful references to see whether they're true (the number of references that I follow where I don't agree with the author that the reference says what they claim is too damn high).

7. On loop: read from the front; when you hit a topic/part you don't understand, focus on and study that part; upon understanding, continue reading through to see whether the new understanding elucidates any other parts you didn't understand.

8. Repeat step 7 until you understand everything.

Now, running concurrently with most of these steps, there are two escape/break decisions to be made at almost every point:

a) Do I understand the paper sufficiently? If yes, stop reading.

b) Is the paper actually worth continued reading? If no, stop reading.

Overall, I'd say the vast majority of papers aren't worth reading.

Some are easy and only require ~one read.

A very, very small number are both worth reading and take a substantial amount of time: multiple reads, several weeks/months to grok. I don't think I'm being hyperbolic when I say the number that justify that is probably less than 10 in your entire lifetime.


For me, it depends obviously on why I’m reading it. If it’s a matter of background, I read the abstract and look at the figures to see if they’re reasonable. Then I file it away.

If I need to design an experiment, I read the abstract and the methodology, then study the results to see if the methodology is any good. The references in the methodology often lead to other good papers. Here you find discrepancies in what they think they’re doing, what they tell you they’re doing, and what they’re actually doing. Reagents, for example, can be very specialized, and may not actually be available from the vendor they list. It’s a great way to “meet your neighbors”, asking them if they can help you find materials.

These days I mostly read critically, as I’m a practitioner and not a researcher. In this case I read the abstract and introduction. If the introduction isn’t 100% stuff I already know, I go out to the references. This makes critical reading time consuming and causes a fractal explosion of the number of papers in the collection. However, it is also how you find bullshit. Often someone in a review will assert that a reference supports a statement. Lazy or mendacious writers will copy that statement complete with the reference into their own paper. The actual reference may say nothing of the sort. Sometimes, there are multiple levels to this telephone game and each level loses nuance while adding error. When you see a lot of this, you know that there’s a bullshit problem in the domain.

If the author seems prone to BS or plagiarism, the results are somewhat questionable.

If the introduction is fine, I read the results and compare bits to the method. In my field, we look a lot at statistical power and effect sizes. Significance tests are often used incorrectly as a proxy and it pays to understand their limitations.
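To make that proxy problem concrete (all numbers below are invented for illustration): with equal group sizes n, the two-sample t statistic scales roughly as t ≈ d·√(n/2), so a negligible standardized effect d clears the usual |t| > 1.96 "significance" bar once n is large enough. A minimal stdlib sketch:

```python
import math
from statistics import mean, stdev

def cohens_d(a, b):
    """Standardized mean difference (Cohen's d) with a pooled SD."""
    pooled_sd = math.sqrt((stdev(a) ** 2 + stdev(b) ** 2) / 2)
    return (mean(a) - mean(b)) / pooled_sd

def approx_t(d, n):
    """Approximate two-sample t statistic for equal group sizes n,
    given standardized effect size d."""
    return d * math.sqrt(n / 2)

# A negligible effect (d = 0.05) goes from "nowhere near significant"
# to "significant" purely by growing the sample.
d = 0.05
print(approx_t(d, 200))    # ~0.5: not significant
print(approx_t(d, 10000))  # ~3.5: "significant", yet still a tiny effect
```

The effect size d is unchanged between the two lines; only n moved, which is why significance alone says little about practical relevance.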

It’s also important to know how close the result is to the real world (for me). A paper showing reduced mortality, faster recovery, less pain, etc., is much better than one that shows a change in a biochemical marker or cellular protein and RNA content. A study in the species of interest is usually better than one in a model.

Ioannidis is right, most studies are flawed, so it’s best to avoid putting too much stock in any single one. A lot of good research has been done by questioning the accepted wisdom, which usually means one or two studies that may have flaws.


I ask myself two questions:

1. Is this paper of interest to me?

2. How will I assimilate the knowledge of this paper into my existing framework?

For (1), the title + abstract helps a lot, followed by the conclusion, which can indicate whether there is really something to be absorbed.

For (2), I need some grasp of the "related work" - i.e. "how have others tried to solve this problem before this work?" So once I affirm interest, I might scan some of the references. If I'm familiar with them, I know enough to assimilate. Otherwise if the paper describes them adequately for the purpose, I settle for that with a note-to-self to dive into anything in detail.

A third kind of "always on" mode for me is "can I reuse any techniques or results in this work?" and I read more carefully to gain that.

For the abstract and structure, my preferred structure (because I was trained like this) is, in this specific order:

1. Introduction - broad sweep description of area to clue people in.

2. Related work - a survey of how people have solved some key thing.

3. Problem statement - What remains unsolved/unaddressed relative to the "related work".

4. Solution - How does this solve it. This includes methods.

5. Validation - How does the author claim that their solution actually solves the problem they set out to tackle. In some cases, this may involve user studies, in others it may be a mathematical result, etc.

This is both my communication framework as well as a knowledge assimilation framework.

edit: I find an abstract that's pretty much one sentence for each of those items, in that order, easiest to assimilate. However, I admit this wasn't easy to develop initially, and I struggled a lot with it before it became normal.


I work on the Semantic Scholar team at the Allen Institute for AI (allenai.org). We're working on solving the problems described in the post.

We're investing considerable effort into making it easier for researchers to find and consume scientific literature. Our team is made up of engineers and researchers who have felt the pain points firsthand and are very motivated to design and build solutions that fix them. Our software is free to use, and always will be.

Our search engine uses things like citation intent, citation influence, and figure and table extraction to make filtering through papers easier. We’re also currently prototyping an augmented reading experience that aims to embed contextual information directly into the reading experience so that it's easier to consume and comprehend academic literature. Give it a try: semanticscholar.org.


Half-OT: What resources can you recommend for searching/browsing master's theses online?

I'm at my half-yearly "I should write my thesis!" cycle and searching for a topic. My idea was to read existing papers and look in the "future work" sections to find something interesting.


Sorry to sound cynical, but in my experience future work sections in papers tend to be rather self-serving. A means to big-up their own research, or explain away discrepancies.

Looking for new ideas? Start hanging around with students/staff from outside your own domain. No better way to get the ball rolling on new perspectives.


Your university or department library should have archive copies of all the theses they’ve accepted. In Iceland, they’re all collected at https://skemman.is/?locale=en as well; similar websites likely exist for other places.


>Most often, what I am trying to get out of the papers is issues of methodology, experimental design, and statistical analysis.

In lieu of consistently good science journalism, I try to take a peek at these things in the relevant papers as well. But this requires the additional knowledge of knowing the limits of certain 'methodologies, experimental designs, and statistical analyses' and what you can/can't extrapolate. Which, as somebody outside of formal research, I've only put together piecemeal through reading other criticisms.

Are there any good, comprehensive resources for the limits of different study methods/experimental designs?


For me during (CS) literature reviews it's generally:

1) Title: My research usually starts with a Google Scholar check and going through 4-5 pages and marking everything to Zotero based on title only. I only go through other databases once I exhausted the most interesting GS stuff.

2) Abstract -> Conclusions: If it fits, it goes onto the to-read stack with a self-written two-line summary of why I included it...I usually start sub-topic stacks at this point.

3) Quick browse the methods section and figures/tables etc.

4) Complete read.

If I'm at the beginning of a literature review, I also read the sources and mark everything that sounds interesting to add later.


[Shameless self plug] I created a newsletter two weeks ago in which I send out my favorite scientific paper of the week on each Friday to the subscribers. Maybe a good option to try out what you learned in this article & thread :D

https://simon-frey.com/weeklycspaper/

As I mostly dive into topics regarding distributed systems and backend development the focus will definitely be in that area.


How do you read so many papers? To properly read one I may need up to a day. If I read a few a week, I wouldn't get any work done.


I read a lot while working on my thesis. That helped to build up a backlog of nice ones :D


What papers have you sent out so far?


You can find all previous issues in the archive: https://simon-frey.com/weeklycspaper/archive/


Sequentially, from the beginning. Take your time.

Scientific articles are usually organised in a specific way that makes it easy to get a good understanding of the domain to which they contribute new knowledge.

What does it mean to read something seriously? We consume content differently according to our own needs. Even if you read something seriously there is always a chance that you didn't read it "seriously" enough.


The alternative three-pass technique for properly reading a research paper, by Prof. Keshav, is worth reading:

http://ccr.sigcomm.org/online/files/p83-keshavA.pdf


1. Read the protocol in the back first, if it exists. 2. Look at all of the charts, especially the subtext/caption. 3. Highlight the actual paper for benefits of the method vs. comparative methods.

Try to find meta-analysis to compare multiple methods.


Where can I find interesting papers, especially about computer science?


Wikipedia usually has academic references at the bottom of their CS articles, which are generally the foundational papers that introduced the idea in the first place.


Dipping my toe into off-topic for a bit:

Wikipedia's "sources" can range from very accurate and on-topic to complete garbage. They certainly train you to filter out the good from the bad, as well as to consider whether the paper is even relevant to the statement it's being cited for.


Maybe a quarter of the papers I read end up being hosted at https://arxiv.org/. I rarely just surf there, but when googling around for interesting cutting-edge topics, a paper hosted on https://arxiv.org/ often comes up in the search results.

Also, there are communities that share a lot of papers. Eg if you're interested in ML (which, if you didn't know, is a CS topic), https://old.reddit.com/r/MachineLearning/ posts a lot of papers.

edit: I just searched for a random CS paper for you. I haven't read it. Are you feeling lucky? https://arxiv.org/abs/1004.2772


There's a nice compilation on this repository https://github.com/papers-we-love/papers-we-love


I'm building https://42papers.com to surface top trending papers in Machine Learning and Computer Science. [shameless plug]



