Hacker News
Introducing OpenType Variable Fonts (medium.com/tiro)
100 points by 3ds on Sept 14, 2016 | 27 comments


This is the return of Multiple Master fonts that I always wished would happen but gave up hope on.

Color me very, very impressed.


Metafont also had similar capabilities, if I recall correctly.


I came here to say the same thing. Multiple masters rise from their dormancy!


So how can I get the different variations of the font in CSS? (e.g. condensed ultra light)

Will it work with font-weight and font-stretch?

E.g., will this use the font's variations instead of letting the browser do the interpolation?

   font-weight: 100;
   font-stretch: condensed;


Google's announcement [1] about OT font variations mentions that a CSS proposal is in the works:

> Together with other browser makers, we’re already working on a proposal to extend CSS fonts with variations. Once everyone agrees on the format, we’ll support it in Google Chrome.

[1] http://opensource.googleblog.com/2016/09/introducing-opentyp...
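For reference, the direction those drafts were heading is a low-level property for per-axis control plus extended ranges for the existing properties. A hedged sketch (property names from the CSS Fonts Level 4 drafts; the specific values are just illustrative):

    /* low-level, per-axis control proposed in the CSS Fonts Level 4 drafts */
    font-variation-settings: "wght" 100, "wdth" 75;

    /* extended ranges proposed for the existing high-level properties */
    font-weight: 100;    /* any number from 1 to 1000, not just multiples of 100 */
    font-stretch: 75%;   /* percentages in addition to keywords like "condensed" */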


Is this like MetaFont?


No. Metafont (at least the format, if not the interpreter) is more akin to TrueType or PostScript font definitions.

This is more akin to the old Adobe Multiple Master fonts, where the user installed one font that could then be dynamically tuned in various ways: weight, descenders, etc.


As the sibling post pointed out, METAFONT can be parameterized by more than just font size. Computer Modern, in particular, has quite a few parameters that you can fiddle with. I don't think many people ever did.

The book Computer Modern Typefaces has a fun section at the end that goes through several poems, showing different parameter settings for the font in each section.


That's what Metafont was supposed to be, too. See, for example, http://www.antonvasetenkov.com/metafont-demo .

But it seems that it ended up being used for a much narrower purpose in the actual TeX stack that people use.


I took it as basically nobody else used METAFONT.

I hate the conservative "everything is old and has already been done" view; however, it is easy to look at some of the original large projects that people chose to ignore and be amazed at what they were already doing. It isn't in XML, but METAPOST and METAFONT are two really, really cool projects.


> I took it as basically nobody else used METAFONT.

Yeah, well, the same was true for Multiple Master fonts. In the end, Adobe gave the font metrics to the TeX community [0]. The complexity of the fonts was obviously too much for the "graphic designer".

[0] https://www.lcdf.org/type/mm-metrics-1.2.tar.gz


What are the odds that this time, things will take off? :)


Ah, thanks for the info. I had no idea.

My first year of college was the first year my school shifted most departments' document recommendations from TeX to MS Word, so I never really used it for authoring. Spending a day playing with it, trying to make something useful, has been on my someday-list for... a depressingly long time.


Very promising!

Is there anywhere we can play with creating variable fonts? Guessing it'll be a while before it hits FontForge...


Most of the type-design world is moving towards a UFO-centric workflow. UFOs are basically simple-to-grok XML files defining Bézier curves. Adobe's AFDKO (Adobe Font Development Kit for OpenType) supports UFO and generates full-fledged OTFs from them. Another great resource is the more recent fonttools fork on GitHub at https://github.com/behdad/fonttools .

I used to like FontForge; it's a fine application, but the move to UFO has made it easier to apply older SVG-based tools to editing splines and saving out as UFOs. Here's a free application that looks promising: https://github.com/trufont/trufont .

Finally, it's worth looking at https://github.com/LettError/MutatorMath , which has many of the tools for interpolating masters. There are actually loads more resources, and if you are short on time or attention, my guess is that the tightest workflow will be coming to RoboFont (which has hovered around 500 euros for a while).

Still, it's all very accessible even outside of FontForge, and the libraries for modifying UFO-style paths are freely available. Take a look; the ecosystem seems very fractured, but all things considered, I think it's a nice distributed workflow given the final output.
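To make that concrete, here is a minimal sketch of where the fonttools fork linked above is heading: its varLib module compiles a set of interpolation-compatible masters, described by a designspace document, into a single variable font. The file names are hypothetical and the API is still settling, so treat this as illustrative:

    # Minimal sketch, assuming the varLib module from the fonttools fork
    # linked above. Requires interpolation-compatible master TTFs referenced
    # from a .designspace file; both file names here are made up.
    from fontTools.varLib import build

    # build() merges the masters and stores the per-axis variation deltas
    # alongside the default outlines in the output font.
    varfont, model, master_ttfs = build("MyFamily.designspace")
    varfont.save("MyFamily-VF.ttf")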


So will this add significant weight to webfonts if they include multiple variants of the same font?


You did not read very far if you missed the bit where it said only deltas will be stored beyond the base. Though I'm sure some klutz will find a way to make obscenely large font files regardless, possibly by including the astral planes.


It's not clear to me how that's actually going to result in smaller file sizes. Are they quantizing these deltas? Are they planning to exploit certain consistencies in font design that will lead to the deltas being more compressible? A delta does not intrinsically require less information than an absolute value.


>It's not clear to me how that's actually going to result in smaller file sizes

In the obvious way that when A and B are similar enough (as font variations are), a delta between A and B is substantially smaller than bundling A AND B.

>A delta does not intrinsically require less information than an absolute value.

In the abstract, no, but in 99% of the cases where it's used in computing (e.g. software updates, rsync, etc.), it does, and that's exactly why it's used.

Why would font variations, which intuitively are more alike than different, have diffs larger than the "absolute value"?
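A toy illustration of the intuition (this is not the actual OpenType delta format, and the coordinates are made up): even when no point is identical between two masters, the point-by-point deltas tend to be small and repetitive, which is exactly what a downstream compressor likes:

    # Toy sketch: store the bold master as per-point offsets from the
    # regular master instead of as a second full set of absolute
    # coordinates. All coordinate values here are invented.
    regular = [(112, 0), (480, 700), (850, 0)]
    bold    = [( 96, 0), (480, 700), (866, 0)]   # same structure, nudged outward

    deltas = [(bx - ax, by - ay) for (ax, ay), (bx, by) in zip(regular, bold)]
    print(deltas)  # [(-16, 0), (0, 0), (16, 0)] -- small, repetitive values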


>In the obvious way that when A and B are similar enough (as font variations are), a delta between A and B is substantially smaller than bundling A AND B.

Are they similar? I would expect that between a bold glyph and a normal-weight glyph, exactly zero points would be the same.

>In the abstract, no, but in 99% of the cases where it's used in computing (e.g. software updates, rsync, etc.), it does, and that's exactly why it's used.

You're comparing vector deltas (which are presumably floating point) to deltas between binaries, where you can take a line-by-line or byte-by-byte diff. This is apples and oranges.


How much of this could have been solved by simply using compression? That is, instead of making up a new delta language, just store the full files, and let compression tools do their job.

I thought this was the definitive reason why git tracks full files rather than diffs: it turns out that's just a better way to do things in most cases.


Compression as in zlib works well on one-dimensional data such as text. Fonts are vectors, described by geometric generator functions, e.g. render "O" as a circle with its center at 50%/50% and a stroke width of 1.2%.

That description is already an (excellent) compression: a 1000 px × 1000 px bitmap of the "O" for your poster would be about 1 MB.

Whereas fonts previously only had rules for adapting to size changes, this standard defines weight as another dimension.

It's quite similar to how JPEG, MPEG, and MP3, by incorporating knowledge about the data being encoded, are better compression methods for their respective domains than WinZip could ever be.


You are still looking at it as per-character compression. I'd imagine full-charset methods could do better.

Additionally, since building the fonts from source isn't time-consuming anymore, you could just focus on compressing the representations that say "circle with center blah". (Which, again, takes this back into METAFONT territory. Not a bad place to be, just bemusing.)


You're right in general, but you gave the examples of JPEG, MPEG, and MP3, which are all lossy compression; LZW/ZIP-compressed images are lossless. I suspect that applying ZIP compression to font files might not reduce file size enough to be worth doing, for the reason you initially stated.


I guess interpolating glyphs will also lose information, so the comparison to JPEG makes more sense.


I suppose the strategy is that, if you know your expected data types and data structures well, it's beneficial to apply delta encoding before general-purpose compression, especially for simple RLE compressors.

In graphics, deltas might be better anyway: since the glyph is offset on the canvas, the renderer can use the diff information as-is once the last point is known, reducing each drawing operation to a simple addition of smaller numbers instead of absolute positioning.

It might also keep the unpacked memory footprint low, since you might be able to get away with fewer bits per encoded change in a typed-array data structure.

I still wonder whether Mapbox ever did size and speed benchmarking of general-purpose-compressed raw signed integers vs. their zig-zag- and delta-encoded vector tile geometries. https://github.com/mapbox/vector-tile-spec/issues/35#issue-1...
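For reference, the zig-zag trick mentioned there is tiny: it interleaves negative and positive values so that small-magnitude signed deltas become small unsigned integers, which a varint or RLE stage can then store in fewer bytes. A sketch in Python:

    # Zig-zag encoding as used by protobuf and Mapbox vector tiles: maps
    # 0, -1, 1, -2, 2, ... to 0, 1, 2, 3, 4, ... so small signed deltas
    # become small unsigned ints. Shifting by 63 assumes values fit in a
    # signed 64-bit range; Python's arbitrary-precision ints make the
    # arithmetic shift handle the sign for us.
    def zigzag_encode(n: int) -> int:
        return (n << 1) ^ (n >> 63)

    def zigzag_decode(z: int) -> int:
        return (z >> 1) ^ -(z & 1)

    assert [zigzag_encode(n) for n in (0, -1, 1, -2, 2)] == [0, 1, 2, 3, 4]
    assert all(zigzag_decode(zigzag_encode(n)) == n for n in range(-1000, 1000))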


There was a ~70% saving from using a variable font versus the separate regular instances. Compression is already applied to web fonts, and you can compress variable fonts as well, so it's not like the two are mutually exclusive.



