Reading the article, what valuable things can we learn? Imperfection exists in everything in the universe - even the ideas in our minds that we sometimes imagine to be ideal and perfect turn imperfect as soon as we write them down (this cursed keyboard!). Imperfection does not destroy all value, or we would still be living in caves, not communicating via an imperfect alphabet and language, over imperfect signals, using imperfect power, etc. (in fact, we would be dead in the caves). Reading and learning from imperfection is the definition of 'reading and learning', because there's nothing else to read.

If I listened to HN comments, very little research would have value, very little information would be worth reading. The top comment is almost always of this nature; it's depressing to me that it still happens; we all should be very familiar by now with social media and these patterns, yet we keep following them.

> I have only skimmed it [the OP].

Maybe that should be at the top of the comment. Imagine an OP which presented a detailed analysis and then, at the bottom, said 'I only skimmed the thing I analyzed' - imagine what the top comment would say.



I object strongly to your comment. I find every part of it to be absurd.

> Reading and learning from imperfection is the definition of 'reading and learning', because there's nothing else to read.

> If I listened to HN comments, very little research would have value, very little information would be worth reading. The top comment is almost always of this nature; it's depressing to me that it still happens; we all should be very familiar by now with social media and these patterns, yet we keep following them.

Just as "laymen blanket dismissing productive scientists" is a tired and sad trend, so too is this - because now there is a delicious and ironic reversal. I am the productive scientist here, whose day job involves extracting the useful content from scientific articles by analyzing their methods and results. When I read, it is my pride to be able to determine when the imperfections of an article destroy its content, or if these imperfections can be worked around and to what degree. Like other academics who can do the same, I believe this is a skill that requires training and intelligence, and its outcome is to make subjects understandable that would otherwise be a maze of bad results. People like me are confident at playing the Many Labs replication guessing game. I read useful articles every day, and the original article is not one of those; it falls squarely in the "awful" bucket. I am not the uninformed layman making pithy snarks which have no specific relevance to the situation at hand - criticisms with general applicability, copied from elsewhere.

You are. You are the layman complaining about what the actual scientists do and how they advance the field. Even your comment complaining about this trend reflects adroitly back on you, in a very ironic manner. Notice how my original comment uses deep field-specific knowledge of fonts, science, and statistics to make its point, while you only use broad strokes about social media trends, as you lack such knowledge. In fact, this remarkable and joyous irony - a person complaining about uninformed criticism while making exactly that uninformed criticism himself - was what led me to make this comment; otherwise I would not have bothered to respond.

> > I have only skimmed it [the OP].

> Maybe that should be at the top of the comment. Imagine an OP which presented a detailed analysis and then, at the bottom, said 'I only skimmed the thing I analyzed' - imagine what the top comment would say.

My comment was based on the parts I read, and it still stands - unless you have something meaningful to say? For example, if you believe you also possess the skill to analyze articles and extract their value, you are welcome to do a complementary analysis. So far, you have not addressed any of the points, only pushed some dismissals which are themselves easily dismissed: a programmer can contribute to the Linux kernel without having read every line.

In fact, I looked up the original article's author, and his affiliation is not only with Adobe. He is also a PhD candidate at Brown University, under advisor Jeff Huang. This makes my opinion far more negative - but not of the PhD candidate, who only has my sympathy. Rather, his advisor bears the blame here, as the advisor has the responsibility to enforce ethics and give guidance on the unfamiliar. PhD students depend on their advisors to know the way forward, especially on statistics and scientific culture, whose conventions they cannot navigate on their own. Jeff's behavior here evokes complex emotions in me, related to the ethics of science and responsibility, leading to this conclusion: Brown University should no longer allow Jeff Huang to advise students, and Jeff Huang's articles should be either ignored or looked over by data thugs if their results need to be relied on. Or, if Jeff's incursion into experimental science is a case of him trying something unfamiliar, then perhaps he too is a victim of his own ignorance and bears no ethical blame or responsibility - but he should stay away until he learns better.


Can we just stick to the technical facts? I think that would be a more productive conversation.

Specifically, your argument rests on the invalidity of "Among high-legibility fonts, a study found 35% difference in reading speeds between the best and the worst."

And the reasoning you give is this metaphor you wrote: "An understandable explanation: imagine having 5 dice. You roll each die 4 times, then compare the highest sum to the lowest sum. Then you report that the highest die rolls 35% higher than the lowest. This is what the authors did, with each die being a font. But the experiment does not actually show evidence of any difference. If you rolled 500 dice, this method could claim that the highest dice are 200% higher than the lowest, even though all dice are still equal."

It seems to me, then, that the metaphor doesn't match the method in the original article. A more accurate metaphor would be the following (I've taken the liberty of revising your text; my change is in brackets, if you don't mind):

An understandable explanation: imagine having 5 dice. You roll each die 4 times, then compare the highest sum to the lowest sum. Then you report that the highest die rolls 35% higher than the lowest. [Then you repeat the whole previous procedure 352 times (352 is the number of participants reported). On average over all participants, the highest die is 35% higher than the lowest with some modest variation. So you claim that there is a 35% variation between the highest die and the lowest die for individuals.]

Seems much more reasonable, doesn't it?


As everything you've written is in good faith and aims to contribute, I am willing to respond to you in the same cooperative manner.

Your revised metaphor is fine. However, the method that produces the 35% figure was faulty from the start, so the added accuracy from repeating the experiment 352 times does not help: it is a more precise repetition of a faulty experiment.

To make this intuitive, imagine having 1000 dice, each rolled only once. Now the highest roll will be 6 and the lowest roll will be 1, because you've rolled so many dice that you are sure to hit every number. Thus, the highest is 500% higher than the lowest. If we repeat this 352 times, the average over all participants will still be 500%. However, it is not an indication that we have biased dice.
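If you want to see this concretely, here is a quick simulation you can run (a rough sketch in plain Python; the dice and roll counts come from our metaphors above, and 352 is the reported participant count):

    import random

    def gap_percent(n_dice, n_rolls):
        # One "participant": roll each of n_dice fair dice n_rolls times
        # (each die standing in for a font), sum each die's rolls, then
        # report how much the highest sum exceeds the lowest, in percent.
        sums = [sum(random.randint(1, 6) for _ in range(n_rolls))
                for _ in range(n_dice)]
        return 100.0 * (max(sums) - min(sums)) / min(sums)

    def mean_gap(n_dice, n_rolls, participants=352):
        # Average the per-participant gap over 352 participants, as in the
        # revised metaphor. Averaging stabilizes the number; it does not
        # remove the bias of always comparing the max against the min.
        return sum(gap_percent(n_dice, n_rolls)
                   for _ in range(participants)) / participants

    print(round(mean_gap(5, 4)))     # roughly 80, from perfectly fair dice
    print(round(mean_gap(1000, 1)))  # 500: max is 6, min is 1, nearly always

Both numbers are large and stable across runs even though every die is fair. That is exactly the problem: comparing the best performer against the worst manufactures a "difference" out of pure noise, and averaging over participants only makes the manufactured number more precise.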


> I object strongly to your comment. I find every part of it to be absurd.

It's just like the top comment: negating "every part" of what someone says. There is more to life than that - namely, life itself, and people doing good, productive things despite the flaws that seem so absurd.


I assume the very indignant lady so angry at Brown and Jeff fully understands how mixed-methods models work, and has found a flaw in the peer-reviewed design. A flaw that Nielsen missed, and that the editorial process of this longstanding journal missed as well. Maybe, in addition to Brown firing a PhD advisor, we should get Nielsen out of NN, revoke the charter of Transactions on Computer-Human Interaction, and disband the ACM.

Or, possibly, we might trust peer review, and not angry internet randos?

Sacrilege, I know...


> If I listened to HN comments, very little research would have value

That's the truth. Very little research does have value.


Why would you think that? As long as the research is well designed, is performed free from the influence of those who would sacrifice the truth for their own personal gain, is carefully reviewed before being published, is published and made available to the public regardless of the results, and has its results later verified by multiple independent researchers - how could it be anything but valuable! Oh wait... never mind.


No HN comments meet those standards, yet the GP finds them valuable. Very little information in the world meets those standards.


I'm personally glad that internet comments (and most information in the world) aren't held to the standards of scientific/academic research, but man, I do wish most actual research met that high bar.


> If I listened to HN comments, very little research would have value, very little information would be worth reading. The top comment is almost always of this nature; it's depressing to me that it still happens; we all should be very familiar by now with social media and these patterns, yet we keep following them.

If you listened to HN comments, you would have a more accurate model of reality.

It is true that very little research has value. It is true that very little information is worth reading.

The hard part is figuring out which little bit that is.


Why do you trust HN comments over research? The researchers test their ideas in reality, in this case with over 300 people, and carefully analyze the results. The same researcher could post an HN comment in 30 seconds, with no basis, and for some reason some people would believe that more.


> Why do you trust HN comments over research?

I don't, that's a misinterpretation of the meaning I intended.

I wrote

> > It is true that very little research has value. It is true that very little information is worth reading.

That's just the nature of research and information.

Most research isn't going to lead to anything, and that's ok because the stuff that does lead to something more than makes up for it.

I'm pretty sure that most of the stuff I read, and most of the research that is done, has mostly entertainment value. What do you remember reading five years ago that made a difference in your life? I'm willing to bet it's a small percentage of the stuff you read.

> The researchers test their ideas in reality, in this case with over 300 people, and carefully analyze the results.

Did they, though (carefully analyze their results)? They didn't share their dataset. They didn't calculate a single variance or a single p-value.

> The same researcher could post a HN comment in 30 seconds, with no basis, and for some reason some people would believe that more.

People tend to believe the last argument they hear (unless they think it is obvious bullshit). Perhaps post a counterargument.

Also, the parent comment in this context had a rather strong basis.



