
I find Slint's font rendering odd and somewhat off-putting. It doesn't look sharp, especially on classic 96 DPI displays. Big text manages to look both fuzzy and aliased at once, and small text has inconsistent intensity. Lack of subpixel rendering and hinting? Rounded corners look sharp in comparison.


It seems you may have used the FemtoVG backend, which is written in pure Rust and is the default when Qt is not installed. We offer multiple rendering backends, and you may have better results with the Qt or Skia backend, both of which use native font rendering. Additionally, a recent change made improvements to address this issue in our FemtoVG backend: https://github.com/slint-ui/slint/pull/2618
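If you want to compare the renderers on the same machine, here is a minimal sketch. It assumes Slint's SLINT_BACKEND environment variable; the exact accepted backend names depend on the compiled-in cargo features and your Slint version, so treat the values below as assumptions to verify against the docs.

    // Slint picks its backend at startup from the SLINT_BACKEND
    // environment variable, so the same binary can be run under
    // different renderers without recompiling. Backend names here
    // are assumptions to check against your Slint version:
    //
    //   SLINT_BACKEND=winit-skia    ./demo   # Skia renderer
    //   SLINT_BACKEND=winit-femtovg ./demo   # pure-Rust default
    //   SLINT_BACKEND=qt            ./demo   # Qt, if compiled in

    slint::slint! {
        export component Demo inherits Window {
            Text {
                text: "Compare this text across backends";
                font-size: 24px;
            }
        }
    }

    fn main() {
        Demo::new().unwrap().run().unwrap();
    }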


I was primarily looking at demos on the homepage.


That's unfortunately an issue I have seen with several new GUI frameworks over the past few years: as high-DPI displays become more widespread, they tend to push subpixel rendering support down the road.


And understandably so. Subpixel rendering only works on some kinds of displays (many modern displays don't have 3 vertical subpixels per pixel), makes the text look kinda bad due to the wonky colors, and really doesn't mesh well with any sort of transparency or even coloured text. It also requires keeping 3 images of each glyph in memory to account for the 3 different subpixel offsets, and it deeply entangles detailed knowledge of the monitor into your font rendering: you need to re-render everything and update all text any time the orientation changes or the window is dragged between monitors. And how the hell do you handle a window stretching across 2 different monitors?
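To make that concrete, here's a toy sketch of the RGB-stripe case (all names hypothetical): the glyph is rasterized at 3x horizontal resolution, and each physical pixel's red, green, and blue stripes get their own coverage sample. That's where both the extra resolution and the color fringes come from.

    // Toy sketch of RGB-stripe subpixel rendering; names are hypothetical.
    // `coverage` holds one alpha sample per subpixel (3 * width_px samples),
    // as if the glyph had been rasterized at triple horizontal resolution.
    fn subpixel_row(coverage: &[u8], width_px: usize) -> Vec<[u8; 3]> {
        assert_eq!(coverage.len(), width_px * 3);
        let mut row = Vec::with_capacity(width_px);
        for px in 0..width_px {
            let i = px * 3;
            // Each stripe of the physical pixel gets its own coverage value:
            // triple the effective horizontal resolution, at the cost of the
            // colored fringes a later filtering pass has to soften.
            row.push([coverage[i], coverage[i + 1], coverage[i + 2]]);
        }
        row
    }

    fn main() {
        // A 2-pixel row with a stem edge landing between subpixels.
        let samples = [0u8, 128, 255, 255, 64, 0];
        println!("{:?}", subpixel_row(&samples, 2)); // [[0, 128, 255], [255, 64, 0]]
    }

Note how even this toy version hard-codes the stripe order and geometry: a BGR panel or a rotated screen invalidates the output, which is exactly the monitor-entanglement problem above.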

That's not to dismiss subpixel rendering; there is arguably a legibility improvement in trading off color accuracy for horizontal resolution. But it's really no wonder that new frameworks don't bother and that old frameworks are losing the ability to do it. Apple, for example, ripped out subpixel rendering from their frameworks a long time ago (before Retina displays, IIRC).


> Apple, for example, ripped out subpixel rendering from their frameworks a long time ago (before Retina displays, IIRC).

...or at least when Retina displays were still rare and expensive, probably to encourage their adoption.


Probably because just using higher resolution is easier if it's available. Taking advantage of the physical structure of a pixel seems weird to me.


Actually, CRT monitors did "subpixel rendering" automatically when the beams hit the subpixels only partially. So using this technique on LCDs is just trying to emulate a feature that CRTs had built in.



