Best Font for Online Reading: No Single Answer (nngroup.com)
229 points by JW_00000 on April 25, 2022 | 139 comments


Big warning: their summary is "Among high-legibility fonts, a study found 35% difference in reading speeds between the best and the worst."

This is completely wrong and comes from an abuse of statistics. See the original research at https://dl.acm.org/doi/10.1145/3502222#d1e6428

An understandable explanation: imagine having 5 dice. You roll each die 4 times, then compare the highest sum to the lowest sum. Then you report that the highest die rolls 35% higher than the lowest. This is what the authors did, with each die being a font. But the experiment does not actually show evidence of any difference. If you rolled 500 dice, this method could claim that the highest dice are 200% higher than the lowest, even though all dice are still equal.
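For anyone who wants to check this at home, here is a rough Python sketch of the dice version (fair dice throughout, so any "difference" it reports is pure noise; the exact percentage it prints is illustrative, not the study's 35%):

  import random

  def best_vs_worst_gap(n_dice=5, n_rolls=4):
      # Sum n_rolls rolls for each die; every die is identical and fair.
      sums = [sum(random.randint(1, 6) for _ in range(n_rolls))
              for _ in range(n_dice)]
      # The "best font reads X% faster than the worst" analogue.
      return (max(sums) - min(sums)) / min(sums)

  gaps = [best_vs_worst_gap() for _ in range(10_000)]
  print(f"average spurious gap: {sum(gaps) / len(gaps):.0%}")
  # Prints a large percentage despite zero real differences; add more
  # dice (fonts) and the spurious gap only grows.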

The original authors seem aware of this shortcoming, but did it anyway: "we are somewhat stretching the applicability of a Cohen’s d analysis for this data". This is likely because they did not know of a better method, but it is still wrong to push this analysis. The main author is from industry, so perhaps he was unaware that this effect can be corrected for, or that this type of misleading claim is malpractice. But someone in the chain of publishing - the journal editors, the reviewers, the large author list, or Jakob Nielsen who is promoting this - should have caught it. It is their main result!

In the absence of legitimate statistics, the article's circumstances point to a failure to detect measurable differences between fonts. There are two ways one font may be better than another:

1. across the population, so that one font is better for everyone

2. personalized, so that different fonts are better for different people

The first case should be easily detectable, and the second case should see some correlation between preference and speed because of a familiarity bias (https://dl.acm.org/doi/10.1145/3502222#d1e7351). These are the Bayesian expectations I walked in with. Neither of these appears supported by the article, although I have only skimmed it.

To be clear, the experiment does not give evidence that fonts perform equally either. It looks more likely that the experiment design failed.


Personally, I think that you could deduce that the conclusions were likely invalid by simply looking at the ordering of the "best fonts" list.

There is simply no metric by which any reasonable person could arrange that set of fonts in that order.

Serif vs sans? Random. Condensed vs spaced out? Random. Heavy vs light? Random.

Obviously there can be subtle interactions that we haven't understood yet, but there is simply no hypothesis presented as to what features could be important. A study with "statistically significant" (perhaps) results and no hypothesis ought to at least be replicated before we even discuss it.


Well, it doesn't seem completely off the wall: to my eye the top seven look pleasant to read, with the possible exception of Oswald, while the bottom three (Avenir Next, Avant Garde, and Open Sans) look a tad obnoxious.

Curious, because Open Sans looks so normal, yet so subtly bad...


The fact that Helvetica and Arial are far apart, yet completely indistinguishable by the average non-font-nerd is another indication to me that this ordering is fairly random.


Reading the article, what valuable things can we learn? Imperfection exists in everything in the universe - even the ideas in our minds that we sometimes imagine to be ideal and perfect turn imperfect as soon as we write them down (this cursed keyboard!). Imperfection does not destroy all value, or we would still live in caves and wouldn't be communicating using an imperfect alphabet and language, over imperfect signals, using imperfect power, etc. etc. (in fact, we would be dead in the caves). Reading and learning from imperfection is the definition of 'reading and learning', because there's nothing else to read.

If I listened to HN comments, very little research would have value, very little information would be worth reading. The top comment is almost always of this nature; it's depressing to me that it still happens; we all should be very familiar by now with social media and these patterns, yet we keep following them.

> I have only skimmed it [the OP].

Maybe that should be at the top of the comment. Imagine an OP which presented a detailed analysis and then, at the bottom, said 'I only skimmed the thing I analyzed' - imagine what the top comment would say.


I object strongly to your comment. I find every part of it to be absurd.

> Reading and learning from imperfection is the definition of 'reading and learning', because there's nothing else to read.

> If I listened to HN comments, very little research would have value, very little information would be worth reading. The top comment is almost always of this nature; it's depressing to me that it still happens; we all should be very familiar by now with social media and these patterns, yet we keep following them.

Just as "laymen blanket dismissing productive scientists" is a tired and sad trend, so too is this - because now there is a delicious and ironic reversal. I am the productive scientist here, whose day job involves extracting the useful content from scientific articles by analyzing their methods and results. When I read, it is my pride to be able to determine when the imperfections of an article destroy its content, or if these imperfections can be worked around and to what degree. Like other academics who can do the same, I believe this is a skill that requires training and intelligence, and its outcome is to make subjects understandable that would otherwise be a maze of bad results. People like me are confident at playing the Many Labs replication guessing game. I read useful articles every day, and the original article is not one of those; it falls squarely in the "awful" bucket. I am not the uninformed layman making pithy snarks which have no specific relevance to the situation at hand - criticisms with general applicability, copied from elsewhere.

You are. You are the layman complaining about what the actual scientists do and how they advance the field. Even your comment complaining about this trend reflects adroitly back upon you, in a very ironic manner. Notice how my original comment uses deep field-specific knowledge of fonts, science, and statistics to make my point, while you only use broad strokes about social media trends, as you lack such knowledge. In fact, this remarkable and joyous irony of a person complaining about uninformed criticism while making exactly that uninformed criticism himself was what led me to make this comment; otherwise I would not have bothered to respond.

> > I have only skimmed it [the OP].

> Maybe that should be at the top of the comment. Imagine an OP which presented a detailed analysis and then, at the bottom, said 'I only skimmed the thing I analyzed' - imagine what the top comment would say.

My comment was based on the parts I read, and it still stands - unless you have something meaningful to say? For example, if you believe yourself to also possess the skill to analyze articles and extract their value, you are welcome to do a complementary analysis. So far, you have not addressed any of the points, only pushed some dismissals which are themselves easily dismissed: a programmer can contribute to the Linux kernel without having read every line.

In fact, I looked up the original article's author, and his affiliation is not only with Adobe. He is also a PhD candidate at Brown University, under advisor Jeff Huang. This makes my opinion far more negative, but not of the PhD candidate, who only has my sympathy. Rather, his advisor bears the blame here, as the advisor has the responsibility to control ethics and give guidance on the unfamiliar. PhD students depend on their advisors to know the way forward, especially on statistics and scientific culture, whose conventions they cannot navigate on their own. Jeff's behavior here evokes complex emotions in me, related to the ethics of science and responsibility, leading to this conclusion: Brown University should no longer allow Jeff Huang to advise students, and Jeff Huang's articles should be either ignored, or looked over by data thugs if their results need to be relied on. Or, if Jeff's incursion into experimental science is him trying something unfamiliar, then perhaps he too is a victim of his own ignorance and bears no ethical blame or responsibility, but he should stay away until he learns better.


Can we just stick to the technical facts? I think that would be a more productive conversation.

Specifically, your argument rests on the invalidity of "Among high-legibility fonts, a study found 35% difference in reading speeds between the best and the worst."

And the reasoning you give is this metaphor you wrote: "An understandable explanation: imagine having 5 dice. You roll each die 4 times, then compare the highest sum to the lowest sum. Then you report that the highest die rolls 35% higher than the lowest. This is what the authors did, with each die being a font. But the experiment does not actually show evidence of any difference. If you rolled 500 dice, this method could claim that the highest dice are 200% higher than the lowest, even though all dice are still equal."

It seems to me, then, that the metaphor doesn't match the method in the original article. A more accurate metaphor would be (I've taken the liberty of revising your text; my change is in brackets, if you don't mind):

An understandable explanation: imagine having 5 dice. You roll each die 4 times, then compare the highest sum to the lowest sum. Then you report that the highest die rolls 35% higher than the lowest. [Then you repeat the whole previous procedure 352 times (352 is the number of participants reported). On average over all participants, the highest die is 35% higher than the lowest, with some modest variation. So you claim that there is a 35% variation between the highest die and the lowest die for individuals.]

Seems much more reasonable, doesn't it?


Since everything you've written is in good faith and aspires to contribute, I am willing to respond in the same cooperative manner.

Your changed metaphor is fine. However, the method that produced the 35% figure was already faulty, so the extra precision gained by repeating the experiment 352 times does not help: it only measures a faulty quantity more precisely.

To make this intuitive, imagine having 1000 dice, each rolled only once. Now the highest roll will be 6 and the lowest roll will be 1, because you've rolled so many dice that you are sure to hit every number. Thus, the highest is 500% higher than the lowest. If we repeat this 352 times, the average over all participants will still be 500%. However, it is not an indication that we have biased dice.
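And here is a sketch of why the averaging doesn't rescue it, reusing the fair-dice simulation from my earlier comment (again, no real differences exist anywhere in this setup):

  import random

  def best_vs_worst_gap(n_dice=5, n_rolls=4):
      sums = [sum(random.randint(1, 6) for _ in range(n_rolls))
              for _ in range(n_dice)]
      return (max(sums) - min(sums)) / min(sums)

  # One biased "gap" per participant, averaged over 352 participants.
  participants = [best_vs_worst_gap() for _ in range(352)]
  print(f"mean gap across participants: {sum(participants) / len(participants):.0%}")
  # The mean settles on a large, stable value: averaging 352 copies of a
  # biased estimate gives a precise estimate of the bias, not of any real
  # effect. The standard fix is to compare the observed gap against this
  # simulated null distribution of gaps, not against zero.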


> I object strongly to your comment. I find every part of it to be absurd.

It's like the top comment, negating "every part" of what someone says. There is more to life - that is, there is life, and people doing good, productive things despite our flaws that seem so absurd.


I assume the very indignant lady so angry at Brown and Jeff fully understands how mixed-methods models work, and has found a flaw in the peer-reviewed design. A flaw that Nielsen missed, as well as the editorial process of this longstanding journal. Maybe, in addition to Brown firing a PhD advisor, we should get Nielsen out of NN, revoke the charter of Transactions on Computer-Human Interaction, and disband the ACM.

Or, possibly, we might trust peer review, and not angry internet randos?

Sacrilege, I know...


> If I listened to HN comments, very little research would have value

That's the truth. Very little research does have value.


Why would you think that? As long as the research is well designed, performed free from the influence of those who would sacrifice the truth for their own personal gain, carefully reviewed before publication, published and made available to the public regardless of the results, and its results later verified by multiple independent researchers - how could it be anything but valuable! Oh wait... never mind.


No HN comments meet those standards, yet the GP finds them valuable. Very little information in the world meets those standards.


I'm personally glad that internet comments (and most information in the world) aren't held to the standards of scientific/academic research, but man, I do wish most actual research met that high bar.


> If I listened to HN comments, very little research would have value, very little information would be worth reading. The top comment is almost always of this nature; it's depressing to me that it still happens; we all should be very familiar by now with social media and these patterns, yet we keep following them.

If you listened to HN comments, you would have a more accurate model of reality.

It is true that very little research has value. It is true that very little information is worth reading.

The hard part is figuring out which is that little bit.


Why do you trust HN comments over research? The researchers test their ideas in reality, in this case with over 300 people, and carefully analyze the results. The same researcher could post a HN comment in 30 seconds, with no basis, and for some reason some people would believe that more.


> Why do you trust HN comments over research?

I don't, that's a misinterpretation of the meaning I intended.

I wrote

> > It is true that very little research has value. It is true that very little information is worth reading.

That's just the nature of research and information.

Most research isn't going to lead to anything, and that's ok because the stuff that does lead to something more than makes up for it.

I'm pretty sure that most of the stuff I read, and most of the research that is done, has mostly entertainment value. What do you remember reading five years ago that made a difference in your life? I'm willing to bet it's a small percentage of the stuff you read.

> The researchers test their ideas in reality, in this case with over 300 people, and carefully analyze the results.

Did they, though (carefully analyze their results)? They didn't share their dataset. They didn't calculate a single variance or p-value.

> The same researcher could post a HN comment in 30 seconds, with no basis, and for some reason some people would believe that more.

People tend to believe the last argument they hear (unless they think it is obvious bullshit). Perhaps post a counterargument.

Also, the parent comment in this context had a rather strong basis.


One of the worst things to happen to web usability was the shift away from the user deciding how pages look. In an ideal world, the Web ecosystem would have grown up such that nearly all pages respect user-configured font and color choices. Then it wouldn't much matter which font the page creator decided was best, since the user's browser would automatically override it.


This died mostly because the defaults were so terrible.


This died because convention > configuration

Most users do not want to configure fonts or anything really to use the web.


I think it is deeper than personal want. I have a feeling that the more people communicate with each other about something, the more they value their personal experience being relatable. Every customization makes your experience less relatable.


Wow, I never thought about that in this category. Thanks for this insight.


This could have ended up as a few simplified settings, as presented to most users... as we have with light and dark modes. Colorblind themes (for multiple kinds of colorblindness, say), high-readability themes for people who need the large-print editions of books, that kind of thing. Doesn't have to mean making everyone pick exact point values for every type of H tag, or whatever. A few presets in a dropdown could be really helpful—again, just look how everyone goes nuts over something as simple as having a "dark mode" setting.


I know a few friends who have 'custom' fonts on their phones. I assume they are from preset options, just nonstandard. It is a surprisingly jarring experience when they hand me their phone to do something or read something. I get a similar response because I have my text as small as possible on my phone.

There are certainly people who like to tweak things a little, and then there are the tinkerers who want to use custom fonts or other changes. I wonder if the customization slowed down as computers for work got more standardized? Or maybe interchangeable is a better word.


> There are certainly people who like to tweak things a little, and then there are the tinkerers who want to use custom fonts or other changes. I wonder if the customization slowed down as computers for work got more standardized?

As the expectation became "we should design our products so that the most technologically inept person can use them", customization in general got sidelined, and a huge number of users are either unaware of what customization options still exist or lack the confidence or capability to take advantage of them. The result is that most people's stuff looks standardized, and a small number of folks (the ones who have always loved customizing and personalizing everything) are the exception.

Perhaps those of us who like to customize were always doomed to be the exception rather than the rule, since sticking with defaults is the path of least resistance, and as long as something mostly works many users won't bother to change anything at all. It's also harder for the pro-customization crowd because eventually developers are going to reason along the lines of: "The majority of our users use whatever defaults we impose on them, and having things more standardized can make it easier for support anyway, so why are we spending time/resources maintaining all these customization options that only a few people use and which could potentially make a clueless user confused and angry?" That reasoning can lead to further restricting/limiting customization options.


> I know a few friends who have 'custom' fonts on their phones.

It can get much worse. I have a friend who uses a "cute", handwritten-looking font on her phone. She is Chinese. When she sends me screenshots, I can barely read them. I have enough trouble with standard characters.

QQ took that one step further, and let you customize the font in which your messages appeared to other people. I don't think QQ is as popular as it was, though.


> I know a few friends who have 'custom' fonts on their phones.

The weird thing is, whenever I see a custom font (or at least when it’s noticeable) on a phone, it’s Comic Sans. It’s usually people who are not super tech savvy, but still they went out of their way to customize their phone to show all interface elements in Comic Sans.


Comic Sans is extremely legible on small screens for people with poor eyesight. It's a perfectly acceptable use of the font.


Interesting, I did not know that. But I don’t think it’s relevant; none of them have bad eyesight, and they also make heavy use of backgrounds that make things harder to read (though maybe CS counteracts the issue).


I think people who aren't sufficiently tuned into computer geek and/or Web and/or font nerd culture aren't aware that they're supposed to think Comic Sans is bad, and they simply like it. I see it used all the time, in many contexts, by people who seem to just think it's fun.


I often notice when other people have worse eyesight than me. It wouldn't surprise me if you're unaware of how good your eyesight actually is, and/or your friend believes their eyesight is at least above average (at least with glasses) when maybe it's not.


I think it also died because styling could so easily destroy legibility. If text display areas were specifically designed with some fancy typeface like Helvetica in mind and I preferred Courier New, the UI (if poorly designed) would often just break wholesale.


This died when the web became mostly a marketing tool for businesses. It's a shame, but inevitable.


My favorite part of HN is when someone posts something I webbed in 1995, and then a ton of people complain that their browser defaults make the content look ugly.

But they're complaining about me, not their browser.


Every time I need to read some text longer than a few lines, I run Firefox Reader mode. It is great.


But all browsers can already do this. Most people don’t enable that setting, however. Could either be because they prefer to see the site as it’s intended to look, or because they’re unaware of the setting.


That's because it's buried (and has been increasingly so over the decades), it'll mess up some sites and won't do anything on a bunch of others because no-one accounts for user-defined colors/margins/fonts anymore when designing for the web, and it has more options than most people want/need to deal with.

A simple browser-provided theme dropdown containing a few nice options, directly in the main browser chrome, from a major browser (perhaps Firefox, back when it still counted as "major") could have changed things completely—see the pressure on sites to support "dark mode" now that that's a simple option to enable.

It'd be nearly impossible to change now, but 10-15 years ago, maybe it could have been changed, if any of the very small number of entities steering the direction of the web had tried.


Yeah the web was much more usable when you could change it with some simple and intuitive edits to .Xresources.


Yet theming with exactly two options (light/dark) is quite popular.

It was a UX problem, not a problem with the core concept.

Now, decades into The Web, it's also a chicken/egg problem, because almost no sites are designed to behave OK under reasonable customization by the user, and almost no users customize their browsers' default styles, so why would sites change to accommodate that? Aside from the light/dark thing, of course, and you do see users pushing for support for that, and sites putting in effort to support it.

If Apple expands Dark Mode to include a half-dozen other options for a11y and such, I bet you'd see support for those become fairly common. It's just got to have a decent UI and the push has to come from a browser with a large enough user-base to encourage site operators to care. Once upon a time, Firefox could likely have done it, assuming they could manage not to screw it up. These days, Apple, Google, and MS are the only ones who could realistically try.

Actually, we have another version of this, now that I think about it: reader mode. People seem to really like it.


This is like that Malcolm Gladwell story about the best spaghetti sauce. There is no one size fits all - different people like different things.

https://www.ted.com/talks/malcolm_gladwell_choice_happiness_...


It's probably about the font just being well designed, having consistent kerning and whatnot.

Same as with design guidelines. Every couple of years something new comes along and is better than anything before.

We had an intern a few years ago who was still working on his master's and also made some money on the side with "web stuff". He wouldn't stop talking about Material Design: how it is the best ever, how every margin and padding is scientifically proven to be perfect to the pixel, how the blue they picked is perfection, how rounded corners and 3D effects on buttons are fatiguing to the eye, and so on and so on. I guess it was the first time this fellow consciously witnessed the release of a web design framework. It was near impossible to convince him that using Bootstrap is just as good, and that while certain rules for ratios between paddings, margins, and font sizes exist, it's much more important that your theme is consistent and that you apply it consistently.


This is pretty much what happens to anyone who enthusiastically enters a new field. You can tell when someone both cares about typography and is lacking in experience when they start talking about how Comic Sans is the worst font ever made or Helvetica is the best font ever made. They're both excellent fonts when used as intended.


I find it amazing that people ever think that there might be a one-size-fits-all solution to a subjective human experience.

Spaghetti Sauce, Fonts.

Sure no one likes dog crap in their sauce, so there are obvious experiences no one likes, but to assume that there is one single one that everyone likes?

Sometimes we get so nearsighted that simply reframing the question makes the answer seem obvious.


Making good spaghetti isn't random. There is a science to cooking and making meals that have broad appeal.

Good typography matters, and it's not a wholly subjective or interpretive experience where wingdings, comic sans, and Times New Roman all are equally likely to provide an efficient reading experience.

We aren't that different, after all.


Sure, but that doesn’t mean that you are likely to find one single spaghetti or one single font that is “the best” for more than a fraction of the population. Too much variation and not enough alignment on what constitutes “best”.


Almost all human beings enjoy opiates. You can dose almost anyone with them and the experience is pleasant. This is why they are used so effectively as analgesics. The subjective experience is almost universally pleasant.

MDMA is similar. Sugar is similar.


Who says there is one thing everyone likes best? (The OP says the opposite.)


We can't even agree on which water is best.


Water? You mean like in the toilet? What for?


I like blue toilet water


Very interesting takeaways regarding different results among old and young:

> The takeaway is that, if your designers are younger than 35 years but many of your users are older than 35, then you can’t expect that the fonts that are the best for the designers will also be best for the users.

> the differences in reading speed between the different fonts weren’t very big for the young users. Sure, some fonts were better, but they weren’t much better. On the other hand, there were dramatic differences between the fastest font for older users (Garamond) and their slowest font (Open Sans). In other words, picking the wrong font penalizes older users more than young ones.


There was also the suggestion near the end to cut the number of words for older readers, which is just ridiculous.


I work in the world of dyslexia and assistive technology. I find it unfortunate that people (including folks who work on WCAG) emphasize using simple words and short sentences as the primary ways to accommodate dyslexic readers. These strategies are helpful in making text easier to understand, but they also undermine the nuance of communication. Before doing this, designers should think about how text is laid out so that it can be made maximally accessible in its original form.


> I find it unfortunate that people (including folks who work on WCAG) emphasize using simple words and short sentences as the primary ways to accommodate dyslexic readers. These strategies are helpful in making text easier to understand, but they also undermine the nuance of communication.

I think that's a great point. Simplifying meaning for dyslexic and older readers seems condescending and just awful, depriving them of access to the world and degrading them.

However, rather than simplify meaning, we can simplify its representation, the text: Better writing produces simpler text and that takes effort. Mark Twain (might have) said, "I didn't have time to write a short letter, so I wrote a long one instead." George Orwell's rules echo many experts: [1]

* Never use a metaphor, simile, or other figure of speech which you are used to seeing in print.

* Never use a long word where a short one will do.

* If it is possible to cut a word out, always cut it out.

* Never use the passive where you can use the active.

* Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.

* Break any of these rules sooner than say anything outright barbarous.

Again, thought and effort are required. I've always thought it good writing but hadn't considered accessibility. Write better, more succinctly, and help readers in more ways than one - and yourself too.

[1] Politics and the English Language (1946) [though I didn't see the original quote]



I do have font preferences but my greater preference is background color. I've found that hex color #FEF0DF as a background tires me less. For reference, it's roughly the Financial Times background.

Calibre allowing me to set the background color to my preference is a godsend for a heavy reader. HN's background is pretty good also. Anyone know the hex?


#f6f6ef -- right in the HTML


I think the parent was saying "I like a color somewhat similar to FT's background," not "I wish I could have FT's background color but I can't figure out how to get it."


GGP explicitly asked for the HN background color at the end of his comment.


Firefox has a handy color picker called Eyedropper in its Browser Tools menu. I'm not sure about other browsers.


> I do have font preferences but my greater preference is background color.

Same here, I always want #000 on #FFF, so that I can take full advantage of my display’s contrast ratio.
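The WCAG contrast formula makes this concrete; here's a rough Python sketch (standard WCAG 2.x relative-luminance math, applied to hex values mentioned elsewhere in this thread):

  def luminance(hex_color):
      r, g, b = (int(hex_color.lstrip('#')[i:i+2], 16) / 255 for i in (0, 2, 4))
      # Linearize the sRGB channels, then take the weighted sum.
      lin = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
             for c in (r, g, b)]
      return 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]

  def contrast(fg, bg):
      brighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
      return (brighter + 0.05) / (darker + 0.05)

  print(contrast('#000000', '#ffffff'))  # 21.0, the maximum possible
  print(contrast('#000000', '#f6f6ef'))  # ~19.4, HN's background
  print(contrast('#000000', '#fef0df'))  # ~18.7, the FT-ish background above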


On paper or e-ink, this approach makes sense.

On a backlit screen, maximum contrast ratio also means maximum difference in the amount of light emitted, which can be unpleasant.


HN's background is too light. Try #EDD1B0 instead (R 237 G 209 B 176).


Came here to say "duhhh..." but the article is actually interesting, especially to those early web usability followers of Jakob Nielsen (https://www.nngroup.com/articles/author/jakob-nielsen/)


They tested the same typeface twice.

Noto Sans is an extended family of fonts based on Open Sans that covers more languages. The design of their glyphs is exactly the same (being Open Sans). Yet the results place Open Sans last at 254 WPM, and Noto Sans 8th at 272 WPM…


Doing a visual comparison (tab-flipping between https://fonts.google.com/noto/specimen/Noto+Sans and https://fonts.google.com/specimen/Open+Sans?query=open+sans&... , aligning the sample text exactly), I did notice some differences, though:

- The 'g' character was changed; the two are quite different

- For the regular (400) weight, Noto Sans is slightly bolder but also more compact

But you're definitely right, most character shapes look directly inherited from one to the other.

I wonder if the weight & letter spacing change can explain the reading speed differences, or if measurement error / random jitter is actually what explains the different average speed in the study between the two.


Their difference is mentioned in the paper (and they're saying it's the other way around: Open Sans derived from Noto Sans):

"Open Sans is a general use font derived from Noto Sans. Typographers designed Noto Sans for the small screens on Android devices. "

And they mention Noto Sans having hinting for that:

"Also, five fonts have hinting (i.e., are engineered for reading on screens with low resolutions): Arial, Helvetica, Calibri, Noto Sans, and Times."


> "Open Sans is a general use font derived from Noto Sans. Typographers designed Noto Sans for the small screens on Android devices."

Unless I'm missing something, that's not quite right. Open Sans, designed by Steve Matteson, was released in 2011. Noto Sans, commissioned by Google to cover a wide array of languages, was first released in 2013. Similarly there is a Noto Serif, based on Droid Serif. Open Sans itself goes back to Droid Sans, an earlier design by Matteson. Anyhow, whichever way it is, the point is that both typefaces share the same design.

But you're right, the typefaces are not exactly the same. Here's a visual comparison: http://www.identifont.com/differences?first=Open%20Sans&seco... As idlewan pointed out above, the lowercase "g" is different, and so is the uppercase "I." There is also a slight weight difference. As you mention, the authors of the paper also note that Noto Sans is hinted, whereas Open Sans presumably is not. So I am wrong in my comment above to call it "the same" typeface. The changes are very subtle though, so it's interesting to see the difference in performance.

Also worth noting is that they tested Arial and Helvetica (http://www.identifont.com/differences?first=Arial&second=Hel...), both hinted and almost identical in weight. Helvetica ranked 4th (283 WPM) and Arial 11th (270 WPM).


It's almost as though people are individuals. The write-up (OP) often reads like an apology, but I think that is wrong too. For example, some fonts favour certain eye conditions.

So me at 51ish, with shortsightedness in both eyes, some pronounced astigmatism in one of them, and also general "degradation", will have some different ideas about a decent font than someone with 20/20 vision, or my old mum, who had rather better than that (plus longsightedness).

At the moment I need a new pair of specs, which I have held off from getting due to the virus, but my reading speed is mostly down to text size rather than typeface.

Incidentally, a font/fount is an exemplar of a typeface but who's quibbling?


Isn't Georgia a very popular font? Interesting to see it isn't mentioned anywhere. It's the preferred font choice for me, but I know a lot of editorials also use it (or a variation of it).


The font choices are a little odd.

There are a number I'm not familiar with or have maybe heard of only in passing. But Georgia and Verdana in particular seem to be missing.

The delta between Helvetica and Arial also seems a little strange. Yes, there are a few type-nerd differences but AFAIK (feel free to correct) they're basically the same font.


> The delta between Helvetica and Arial also seems a little strange.

That, along with the basically random ordering of the fonts in the final results (i.e. there is absolutely no metric you could order them by to even roughly match the order produced) is what triggers my BS alarm about this whole thing.


The answer is Verdana, but they didn't test it.

I don't think this study is well done. Fonts are learned over time, and people's performance will improve, but differently. There are definitely certain fonts that are optimal.


I agree. I couldn’t help wondering while reading the article whether older users are faster with serifed fonts and younger users are faster with sans-serif fonts; however, they never touched on it.


> A second interesting age-related finding from the new study is that different fonts performed differently for young and old readers. The authors set their dividing line between young and old at 35 years, which is a lower number than I usually employ, but possibly quite realistic given the age-related performance deterioration they measured.

> 3 fonts were actually better for older users than for younger users: Garamond, Montserrat, and Poynter Gothic. The remaining 13 fonts were better for younger users than for older users, which is to be expected, given that younger users generally performed better in the study.

They kinda did touch on it. From what I can see, letter sizing and kerning look to make more of a difference.


Is there a better study you can point to? From what I remember looking into this topic the research has always provided ambiguous results with lots of context-dependence.


I think the answer was Verdana years ago, but not so much today with higher-DPI screens. Verdana was designed for CRTs around 800x600, with wide letter forms and kerning, to get ample spacing between glyph elements. It's the champion of readability in that format, but on a modern 1920+ display with subpixel rendering rather than an aperture grille, Verdana feels too big and clunky.


Tahoma is almost identical to Verdana, just narrower and with tighter spacing. Makes it less clunky.


> I don't think this study is well done. Fonts are learned over time, and people's performance will improve, but differently. There are definitely certain fonts that are optimal.

At least they tested their theories. Do you have a 'well done' study that supports the claims above?


Seconding Verdana, especially on low-DPI screens.


I really like the readability of avionics screens and the recommendations behind them. I like their simple, mostly monospaced fonts and the uncluttered display panes. I'd love it if more monitoring applications used that approach.

edit: https://www.faa.gov/documentlibrary/media/advisory_circular/... has some interesting guidelines.


The lack of antialiasing in many avionics systems is highly detrimental for reading, in my opinion. The use of monospace makes numeric differences more salient, which I assume is a safety advantage. In any case, nobody is reading many words off avionics.


No. They are more related to the "glanceable" displays mentioned in the article.


You'll like https://b612-font.com then.


I actually dislike this font - it reminds me of Microsoft console fonts. I love its name, however.


The article doesn’t address font rendering engines, screen DPI and panel type, or font size, which are important factors in determining which fonts work better than others. But they are right in the conclusion that user customization is needed.


The answer to this question depends on the resolution of the display, the weight of the font, and a person's idiosyncrasies--e.g. nearsightedness.

I use a different font for online reading, coding, and reading ebooks on a high-resolution phone display.

All three are different, at different sizes and weights; the purposes are different.

You know what really helps?

Don't have white as a background. I die now without the Dark Reader browser extension.


>Don't have white as a background.

I would say that it's more important to check your cool design on a normal screen and not on your super expensive designer monitor. Too often I see light gray text on a light background that is probably only readable on the designer's screen.


> Don't have white as a background.

That reduces contrast, which makes text harder to read for me. Today’s prevalent IPS displays have a rather low contrast ratio (typically only 1000:1). Please don’t lower it further by styling text as dark gray on light gray. If black on white looks too bright, you have probably set your screen brightness too high.


It is indeed curious that you say “it depends” and then immediately make a strong, exceedingly opinionated recommendation.

White (whether #fff or something close) is a very good background for most people on most devices. And as a sweeping generalisation, dark-on-light works better than light-on-dark.


If you do CSS, you'd know the best font for 16px isn't the best font for 18px, and definitely not the best for H1, H2, etc.

The Verdana font that HN and Reddit use is pretty good for reading at small sizes.


> People read 11% slower for every 20 years they age.

How can this be anywhere near a useful or accurate statistic? Surely the rates depend on the age, and there’s likely space for significant improvement in someone’s 20s.

I can read in the neighborhood of 750wpm with good comprehension while verbalizing. I’m pretty sure I never was able to read at 900wpm no matter how many years you go back. I don’t think there’s been any meaningful drop at all in the past 20 years. Further, in the next 20 years I wouldn’t be surprised if my reading speed took a huge hit, say 30%-50% (or more.) Even if that averaged out to 10% per year, it would be a misleading statistic.
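Taking the statistic at face value for a second (a purely illustrative sketch; the 750 wpm figure is mine from above, and reading "11% slower per 20 years" as a constant compounding rate is my assumption), here is what it would imply:

  current_wpm = 750
  for years in (20, 40):
      # Undo the claimed 11%-per-20-years slowdown to get the implied past speed.
      implied_past = current_wpm / (1 - 0.11) ** (years / 20)
      print(f"{years} years ago: ~{implied_past:.0f} wpm")
  # 20 years ago: ~843 wpm
  # 40 years ago: ~947 wpm

I'm confident neither of those implied speeds was ever true for me, which is the problem with averaging one rate over everyone and every age.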

That’s also ignoring comprehension and reading level. At 30, I could stay focused while tearing through a Homeric epic and retain a huge amount about the text, including themes, symbolism, metaphors of note, etc. I couldn’t do anything like that at 20, even if my raw reading speed was faster.


The characters we use to interact with our computers were mostly designed to be hand-written and to minimize the amount of movement your hand has to make going from one letter to the next. I theorize that they don't translate all that well to existing display technologies. Not so much because of the shape of the fonts, but because the format isn't conducive to sharing or receiving information as quickly as our machines and our wetware could allow.

If you think about this a little more, programming is actually a way to overcome the limitations of spoken/written languages to an extent, since the machine can parse the text faster than you, and it can read other forms of data that are even more efficient. In my view a monitor displaying human-readable text is similar to a legacy ABI that's kept around because of the technical momentum and mindshare it has, not because it's particularly good.


> The characters we use to interact with our computers were mostly designed to be hand-written and minimize the amount of movement your hand has to make going from one letter to the next.

That is an interesting theory. Do you remember a source for it?

Taking that idea to an extreme (and not suggesting the theory requires such an extreme), imagine optimizing characters only for that specification. I imagine much more prominence for simple, single, mostly horizontal lines, moving left-right and ending rightward.

  / \ - ~ _ , . ` ' ^ n u v w m 2 z ...
The punctuation performs much better. You could imagine 'dot on bottom', 'dot in middle', 'dot on top', 'two dots on bottom' ... 'line on bottom' ... 'tilde on bottom' ... 'slash top to bottom', 'slash middle to bottom' ... etc.


That's one way to do it but it helps to think in terms of cursive, or people's sloppy handwriting where the letters are often connected by at least one stroke so that the tip of the pen doesn't need to be lifted from the paper.

There are also multiple ways to write a given character. For example, some people put a line through the number 7 for easier readability, and so it isn't confused with 1. Something similar can be done with 0 so it isn't mistaken for a 6 or capital O.

The really interesting characters are ones like the number 4, which can be written square or triangular. However, to write the square version quickly you have to lift the pen. To write the triangular version you only need to put the pen to paper once.

These are just some common examples I see in use day to day. I'm sure there are many more optimizations being employed, especially in languages other than English where the characters can be much more complex.


Designing a set of glyphs for an alphabet is a multidimensional optimization problem:

- economical for writing - minimize changes in direction, minimizing movement, etc.

- clarity between glyphs (0 vs. O, + vs ×)

- robust to noise, 3rd graders and doctors

- insensitive to medium (pencil/paper, stick/clay, brush/papyrus)

- legible at small sizes, low contrast, noise in the display (coffee stains, inkjet cartridge low, etc.)

- context - if two glyphs rarely co-occur, it's okay if they look similar (0 and O, I and 1)


I wonder how the modern English alphabet was developed. I assume it evolved, but perhaps it is partly or largely the product of a few influential design decisions.

Also, while I agree those are important measures of performance, I wonder how much the development of the alphabet was influenced by them.

> economical for writing

Another interesting thought experiment would be designing an alphabet for typing, that ignored writing optimization.


>since the machine can parse the text faster than you

You had me up until there; the machine doesn't know jack about text. It knows arrays and sequences of numbers according to the rules we've defined them by, for it.

Yuo cna reda tihs raedliy btu teh copmtuer cna't. Your brain is trained by billions of years of evolution for symbolic parsing and pattern pairing, and language is just one flex of that muscle.
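(If you want to generate more of those, the trick is easy to sketch in a few lines of Python - this toy version just shuffles each word's interior letters, keeping the first and last fixed, and ignores punctuation:)

  import random

  def scramble(text):
      def jumble(word):
          if len(word) <= 3:
              return word  # nothing interior to shuffle
          middle = list(word[1:-1])
          random.shuffle(middle)
          return word[0] + ''.join(middle) + word[-1]
      return ' '.join(jumble(w) for w in text.split())

  print(scramble("You can read this readily but the computer cannot"))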

Where computers thrive is where we've done the hard work to break down the syllabic system that is inherent to our biology into mathematical abstractions that can be computed by addition. Computers are great at solving problems we've already done, and repeating the steps, nothing more.

Our machines are beautiful, well designed levers. But they don't move anything, they leverage our movement.


>Yuo cna reda tihs raedliy btu teh copmtuer cna't.

Why can't it? Isn't that basically what current AI research is doing? Using massively parallel systems to make quick inferences based on existing data sets?

>the machine doesn't know jack about text. It knows arrays and sequences of numbers according to the rules we've defined them by, for it.

If you want to be pedantic and define a computer as the hardware only, sure. The operating system (which contains tools that can in fact parse text) is an essential component in the vast majority of computers in existence. So when I'm discussing computers as a complete, usable unit, then yes, they parse text.

>Our machines are beautiful, well designed levers. But they don't move anything, they leverage our movement.

Well said.


>Yuo cna reda tihs raedliy btu teh copmtuer cna't.

It would be interesting to see what GPT3 would do with this statement.

Edit: just tried it. The GPT-3 chatbot understood it instantly.


Cutting-edge language-processing algorithms, trained by professionals in a herding manner for years, ought to produce something that solves what is basically a parlor trick of language. It might even be a cheat-around via encoded rules, but I don't have the paper.

Still, I'm willing to go out on an increasingly thin limb and say GPT-3 is still playing a really big game of Mad Libs, and that its 'understanding' of the language is more akin to extremely rudimentary mathematical factoring of sentence structure and grammatical rules. That is, it could be trained to recycle what others have done for poetry, and spit out something that sounds like what we've read and call poetry, but it isn't putting out poetry, because it doesn't know what that means - not in a conscious way, but literally: it has no "idea" to express through a medium. It's just transcribing according to rules, not working from, say, first principles which are then interpreted through rules.

Again, too much to demand of a really fast adding machine anyway.


Your view of computers seems overly simplistic. The "adding machine" is just one component in a larger series of abstractions that enable functionality the adding machine is normally too inflexible to perform.

This is because math is a man-made thinking tool, similar to language. It works because we all agree to follow the rules of it, not because those rules are set in stone. The universe operates on its own time, by its own rules. Our constructs are also bound by those rules. You are alive because your entirety is worth more than the sum of your parts, and computers operate on a similar principle. When you shove a wooden board under a rock to pry it loose from the soil, you're leveraging the same forces that allowed you to even wake up that morning, and so does your computer when you jiggle the mouse to wake it.

In my opinion the miracle of consciousness is not the material it gets scribbled onto, but instead the fact it can exist at all.


The Lexend group was formed out of research on second-grade students (US) [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3777945/] that led them to a font design [https://design.google/library/lexend-readability/] that apparently showed improved reading proficiency. A 'world-script' version of the font family is also published as Readex Pro [https://github.com/ThomasJockin/readexpro]


I'm not sure the fonts are the sole consideration, anyway. Let's talk about diagrams.

The original article (https://doi.org/10.1145/3502222) uses the click-a-thumbnail scheme for figures. I'm sure this is the decision of the journal, but it is really quite annoying, since you lose context.

The actual figures are in the authors' control, however, and they are not very clear. Take Figure 2, for a start, which combines an overly small font and low-contrast colours, reducing legibility for no good reason that I can discern.

The PDF is a lot easier to read, as is common with online journals.

I'm not sure how many people will bother trying to read such a long document online, as it is formatted. But sometimes I think the point of such studies is simply to get cited, and this is an ideal paper from that point of view: the title states something that is likely obvious to most people, so this might get cited by quite a few people who don't bother to read the details.


For me the best font for reading online is on paper, outside, in the winter, Doves Type. Or SF Pro Rounded from Apple.

If I can’t read “online” by printing it on paper, I read on a well-calibrated display that casts the same light to all viewing angles, with a 120Hz variable refresh rate. And with macOS’ font rendering.

I’m sure this is a perfectly unique combination of “best”.


What's your workflow for getting digital text into print?

Additionally, what display do you use and how do you ensure it's calibrated?


Ha!, it hadn’t occurred to me to call it a workflow – but it is, I guess. Sometimes it’s just hitting Print in the browser. Sometimes I turn on Safari’s Reader view and print that. For some reason, things printed with the Booklet setting tend to get read sooner. (“Booklet” is a macOS printing feature. It prints two pages on each sheet and orders them so the pile of paper can be keel-stapled into a booklet. Some printer drivers have it too, and Acrobat Reader as well, I believe.)

I try to have fonts set to those two ones I mentioned. Through overrides or preferences. On occasion I’ve converted to Markdown and applied a stylesheet. This seems to work. I read a bit more. Get around to it easier. It might be the ritual, it might be the result. Probably both.

I tend to be quite picky about monitors, though I wish I wasn’t. (It’s a handicap, being this fussy!) The models I end up buying tend to be pretty well calibrated from the factory. LG IPS monitors are good. Right now I have an LG CX 55” OLED TV as my programming display. It is very good. It’s fussy to set up; macOS doesn’t support HDMI 2.1, so right now it isn’t possible to get full, crisp 4:4:4 color and 120Hz at the same time. I go with 120Hz 4:2:0; some color/background combos are noticeably blurry, but the fluidity of 120Hz is so, so nice. I don’t agree with it or want it to be that way, but it is :)

I want a color calibrator. Haven’t got one yet. I have borrowed one on occasion from a photographer friend. I find that it makes a difference. There isn’t a glaringly obvious difference, but everything feels tighter and more relaxed at the same time. Grays are more neutral, colors are more vividly themselves. It’s easier on the brain? Kind of like putting fresh tires on a car. Or working in a quieter place. It adds up.

Last time I looked at color calibration devices, the info over at the DisplayCAL site seemed to be very good: https://displaycal.net/


… and I have an HP m254dw color laser printer that can print on both sides of the paper. It's been worth it! I read WAY more after buying it.

And, heh, I feel obliged to mention the long-arm stapler too. I have a Zenith 502 long-arm stapler. It can easily keel-staple folded A4 or Letter paper. And it's a really good stapler. Staples everything, first try, even really thick bundles of paper. It's funny how big a difference a silly thing like a particular stapler makes. They're made for a reason! Here's the Zenith 502 CUCITRICI DA TAVOLO: https://www.zenith.it/prodotti/cucitrici-da-tavolo/zenith-50...

(Cucitrici – stapler – means "seamstress-er". Haha.)


Thanks for all this info.

It took me a couple of minutes, but I found the booklet print mode under Print > Layout > Two-Sided > Booklet. It makes more sense after printing a few pages out and folding them. What a delightful little feature - a much better size for taking on the go.

Right now I work with a 32" 4K monitor with the UI scaled and sitting about 6 feet away. You've inspired me to get a bigger monitor, mount it, and sit across the room from it. I can also commiserate with being fussy. Sometimes I don't know if it's easier to satisfy or to push back against my own inclinations.

Your colored laser printer and stapler sound amazing as well.

When you say keel-stapled, is that to staple across what would be the spine of a book(let)?


Yes!, that’s what keel-stapled means to me. (It might be an idiom from my mother tongue?)

Thank you for your understanding!! Sincerely. I honestly didn’t know I had so much to say about the process of reading, or how much I had thought about it. It’s kind of after the fact that I realize it – my fiancee was very ill during 2020, undiagnosed for uncomfortably long. Reading scientific papers kept us afloat - her physically and me mentally. So it can all be described as a silly and charming little hobby, part by part, and it’s also the centerpiece of a significant period in life. The little color laser and its whirring sounds and puffs of ozone, and the eccentric little stapler, and myself the funny man downstairs folding and straightening PDF files.


You have a way with composition— potentially with spice added via idioms from your mother tongue. Do you write and publish?


Never really thought about it before reading this article but I think I prefer different fonts for different content: strong preference for clean sans-serif mono fonts for coding, something like Arial for general purpose content and something old fashioned with serifs for literature/artsy content.


I’ve been using Georgia for ages, since it exists almost everywhere and has distinct serifs that (in my case at least) help distinguish between some letter forms. But I’ve never found a good sans-serif equivalent that looked good on a page with it.


Much more important than the font, to me, is font size and line width. Plus, for anything of article length, ability to take and export highlights. That’s why I import all longer form text into Voice Dream Reader. Browser just isn’t the place for reading imo.


100%.

Line length, line-height and text-size in relation to my distance to the screen are key to comfortable reading.


I wish this article included some discussion of serifs on "I" and generally of distinguishing "I" and "l". It's a shame that Clear Sans and Verdana were not included, because they do have a serif "I".


I like that the article about best-reading-font research itself uses Arial as its font.


Personally speaking, I think it's kind of silly that this article neither mentioned screen resolution nor the font rendering differences inherent to various operating systems.


> In this study, participants read 14% faster in their fastest font (314 WPM, on average) compared to their most preferred font (275 WPM, on average).

Faster because "I hate this font and I want to stop looking at text in it as soon as I can"?

I remember reading how all-uppercase letters can make people read faster, but also make them miss misspellings more frequently because it's more annoying to read all-uppercase text.


I could not find the raw data anywhere. Is it really missing from the original paper (https://dl.acm.org/doi/10.1145/3502222)?

I would have really liked to analyse the data myself, but as it is I cannot take this research seriously.


Their study excluded Courier New, how rude! But I love reading text with an absolute lack of kerning & multi-character sigils. (I mean this non-sarcastically just to be clear)

I've always found that reading wide text comes at very little legibility cost, personally, so I try to consume data in monospace whenever possible.


Depends what you're reading. Fixed-width fonts make some inputs easier, like code and data. So if the word "read" means solely flowing text, I can buy an argument that the pretty fonts have value which goes to character recognition, kerning, ligatures, and the role of caps and serifs.

If the word "read" includes "do a rapid scan of a column of data to confidence check it makes sense" or "find the longest number (==longest string) in a list of unsorted numbers" then fixed width will score higher than almost any other quality in my opinion.

My reading of the history of fonts suggests some people thought italianate-styled writing was a mistake: florid and hard to read. To my eyes, gothic is sometimes impenetrably hard to read.


The last author is the same guy who did the "Glanceable Fonts" stuff at MIT: https://www.nngroup.com/articles/glanceable-fonts/


This is great supporting evidence in favor of reintroducing user-selectable stylesheets, along with better UI to allow more users to take advantage of the feature.

Remember when Opera Browser let you switch to user-style mode with one click or keypress?


> The test stimuli were at an approximate 8th grade reading level, which matches our recommendation for web content targeted at a broad consumer audience.

That's depressing to read. But I suppose different people have different strengths.


>that's depressing to read

Congratulations! Depressing reading is second-year-of-college level!


that's an even more depressing thought.


tsk, nobody appreciates Sylvia Plath anymore.


As usual, it is up to personal preference, but I've used Unifont, specifically Unifont CSUR, a bitmap font (though in TTF form), as my sole font across the system - terminal, vim, and web browser (Firefox) - which allows me to enforce a singular font in webpages by disabling font downloads, rotation, and scaling. For me, it appears that having text align to a grid, without any annoying ligatures or oversized characters, makes it more readable.


Good place to ask this: what about the "standard" academic paper font from the LaTeX tradition, Computer Modern? I hate it, but it gives immediate gravitas to anything.


The default font on a Kindle, Bookerly, is pretty great, honestly. Of the ones in the article, I think Lato suits me best.


I find it hard to believe that Open Sans has the worst readability of all the tested fonts. I'm pretty sure the humanist Open Sans is faster to read than the geometric Avant Garde.

With Garamond winning, what about Palatino and other old-style typefaces? There are a million generic sans-serifs in the study and not a single Didone? Mind, I wouldn't bet on it, but it'd be a more interesting comparison...


I was surprised by that too, along with the low standing of Avenir. Those are the most beautiful sans serifs.

P.S. I don't think screen resolution is high enough yet to do justice to a Didone.


No single answer? Sure there is.

It's called Linux Libertine.

There you go.


The HN font is good.


It is Verdana, for those who want to know the font HN uses.


1. xkcd-script.ttf

2. Opera Lyrics Smooth

3. Crete Round

All others should be banned. Especially creepy spidery fonts on some ebooks.


Did we need a study for this?

Next study: "Best colorscheme for coding: no single answer"...

Kidding aside, it makes sense. No best option, just options optimized for different things.


Do you just hate scientific research or something?


On the contrary, I think it's great. But I don't think it's beneficial for anyone when it's phrased as "looking for the best X", especially for things that are obviously not going to have a "best", like fonts. Tradeoffs everywhere.

Don't you think a title like "A comparison of fonts for online reading" or something would be more fitting?



