Re: Redundant and unnecessary data (was): Transforming hmtx table

Vlad wrote:

> I think we need to clearly distinguish two cases:
> 1) taking a font file as an input into the WOFF2 compressor, squeezing as much redundancy out of it and, on the other end, having a decompressor produce a fully functional and 100% equivalent (although not binary matching) font file to use, and
> 2) taking a general-purpose font file and transforming it into a different, stripped-down version of the font file where certain info that was deemed unnecessary for the webfont use case has been stripped (thus producing a font file that I would consider to be an input font file for WOFF2).

> I believe that Adam's proposal describes option 2), a preprocessing step where a font file would be optimized for web use but remains, for all intents and purposes, a generic input font file as far as WOFF2 is concerned. I would argue that 2) is outside of WOFF2 scope, since any font vendor can pre-process and optimize their fonts for web use as part of the font production process (we certainly do it at Monotype). Yet it is not for the WOFF2 encoder to decide what portions of the font data can be lost and never need to be recovered - while the WOFF2 process is not going to produce a binary match, we do strive to preserve every bit of font functionality presented to WOFF2 as input.

I entirely agree that the second case, which is what Adam was proposing, 
is out of scope of the WOFF2 spec. My question was whether it is out of 
scope of the Webfonts WG if the charter were extended beyond WOFF2.

I also agree that the two cases need to be clearly distinguished. But I 
am not sure that they always are, or that there are no grey areas between 
the kind of optimisation that might be done to prepare a font to be made 
into a webfont and the kind of optimisation that might be done while 
making it into a webfont. The cached device metrics tables would seem to me 
to be in this grey area, in that -- in the environments in which 
webfonts are deployed -- stripping those tables would not affect 
the functional equivalence of the decompressed font file.
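To make the grey area concrete, here is a minimal sketch of what that kind of stripping amounts to at the table-directory level. It assumes the "cached device metrics" tables are the OpenType `hdmx`, `LTSH` and `VDMX` tables, which hold pre-computed per-ppem hinting/metrics data that modern anti-aliased web rendering environments generally ignore; the function name and the sample table list are illustrative, not part of any spec.

```python
# Hypothetical sketch: removing cached device-metrics tables from a
# font's table directory. hdmx, LTSH and VDMX carry pre-computed
# per-size data that webfont rendering environments typically ignore.
DEVICE_METRICS_TABLES = {"hdmx", "LTSH", "VDMX"}

def strip_device_metrics(table_tags):
    """Return the table directory with device-metrics tables removed,
    preserving the order of the remaining tables."""
    return [tag for tag in table_tags if tag not in DEVICE_METRICS_TABLES]

# Illustrative table directory of a hinted TrueType font:
tables = ["cmap", "glyf", "head", "hdmx", "hhea", "hmtx", "LTSH", "VDMX"]
print(strip_device_metrics(tables))
# → ['cmap', 'glyf', 'head', 'hhea', 'hmtx']
```

Whether this belongs in a vendor's pre-processing step (case 2) or in the compressor itself (case 1) is exactly the question at issue.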

It is obviously within our remit to make decisions about that grey area, 
to push things in it one way or the other. But having lived with 
non-standard glyph processing and line layout behaviours for twenty 
years, I can't help wondering what happens to the things that get pushed 
into the 'higher level protocol' category. :)

J.

Received on Friday, 24 April 2015 20:27:47 UTC