Re: CFF table processing for WOFF2?

Behdad,

I suggest that you start by using the "Source" fonts that we make available on GitHub, specifically the Source Han Sans (though you can instead use Noto Sans CJK because their CFFs are identical in terms of the actual glyph data), Source Sans Pro, Source Serif Pro, and Source Code Pro families. This should be enough to provide preliminary results across a variety of glyph sets to determine whether further investigation, which would include a larger corpus of OpenType/CFF fonts, has any merit.

Regards...

-- Ken

> On Apr 24, 2015, at 1:03 AM, Behdad Esfahbod <behdad@google.com> wrote:
> 
> Thanks Ken and Vlad.
> 
> The first step would be to get a good corpus of CFF fonts.  After that, we have all the bits and pieces to try.  There are quite a few combinations (~10 to 20), but that's doable in a couple of weeks I would say, assuming I can get help from Jungshik and Rod.
> 
> So, Ken, Vlad, which one of you can contribute a CFF corpus for testing purposes for this project?
> 
> Cheers,
> 
> behdad
> 
> On Thu, Apr 23, 2015 at 6:47 AM, Levantovsky, Vladimir <Vladimir.Levantovsky@monotype.com> wrote:
> I think this is definitely something we need to investigate, and we need to do it soon (as in "now"). The WOFF2 spec is still a working draft, so it is not unreasonable to expect it to change (sometimes dramatically, as was the case with e.g. layout feature support in CSS Fonts), and changes like this one won't really put anything in jeopardy - existing WOFF2 fonts will work fine while the spec and CTS evolve, and implementations will eventually be updated.
> 
> If there are significant gains to be realized from CFF preprocessing, we ought to consider it, but as Jonathan mentioned, the final decision will depend on the tradeoff between the potential benefit of reducing the compressed size and the possible increase in the uncompressed font size.
> 
> Behdad, how long do you think it would take for you to get at least a rough estimate of the compression gains and the CFF size increase due to de-subroutinization?
> 
> Thank you,
> Vlad
> 
> 
> -----Original Message-----
> From: Jonathan Kew [mailto:jfkthame@gmail.com]
> Sent: Thursday, April 23, 2015 2:16 AM
> To: Behdad Esfahbod; WOFF Working Group
> Cc: Ken Lunde; Jungshik Shin
> Subject: Re: CFF table processing for WOFF2?
> 
> On 23/4/15 02:03, Behdad Esfahbod wrote:
> > Hi,
> >
> > Is the working group open to adding a processed CFF table to WOFF2,
> > or is it too late?  Maybe we can do that in a compatible way?
> >
> > There's already a rumour on the net (which we have anecdotally
> > confirmed) that CFF fonts compress better in WOFF2 if desubroutinized.
> > It's unintuitive, but it makes some sense: if Brotli is great at
> > capturing redundancy, it should perform at least as well as
> > subroutinization does.
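> >
> > A quick way to sanity-check that claim (just a sketch in Python; it
> > assumes a fontTools build whose CFF table exposes desubroutinize(),
> > plus the Python brotli bindings, neither of which is a given here)
> > would be to compare the raw and Brotli-compressed sizes of the CFF
> > table before and after desubroutinizing:
> >
> >     import brotli                      # pip install Brotli
> >     from fontTools.ttLib import TTFont
> >
> >     def cff_sizes(path):
> >         # Returns [(raw, compressed)] pairs for the stock CFF table
> >         # and for the desubroutinized one.
> >         results = []
> >         for desubr in (False, True):
> >             font = TTFont(path)
> >             if desubr:
> >                 font["CFF "].cff.desubroutinize()  # assumed API
> >             data = font["CFF "].compile(font)
> >             results.append((len(data), len(brotli.compress(data))))
> >         return results
> >
> > The raw-size delta is the extra memory a decoder would pay; the
> > compressed-size delta is the transfer saving.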
> 
> This is an interesting possibility, but I do have a concern... unless the decoder can "re-subroutinize" the font (which seems like it would add substantial complexity to the decoder), this has the potential to significantly increase the in-memory footprint of the decoded font. For memory-constrained devices, that might be a poor tradeoff.
> 
> (I have no actual statistics to support or refute this. Both the potential savings in compressed size and the resulting change to the uncompressed size would be interesting to know...)
> 
> >
> > I have only one transform on top of that in mind right now: drop the
> > offsets to the charstrings.  At reconstruction time, split the
> > charstring array on endchar.
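> >
> > Note that reconstruction needs a real tokenizer rather than a naive
> > search for byte 14, since that value can also occur inside operands
> > and hintmask data.  A minimal sketch (Python; it assumes the
> > charstrings are already desubroutinized, i.e. no callsubr/callgsubr,
> > so hintmask byte counts can be computed locally):
> >
> >     def split_charstrings(blob):
> >         # Recover per-glyph boundaries from a concatenated Type 2
> >         # charstring blob by splitting after each endchar (opcode 14).
> >         glyphs = []
> >         start = i = 0
> >         nstems = nargs = 0              # stem hints / pending operands
> >         while i < len(blob):
> >             b = blob[i]
> >             if b == 28:                     # shortint: 2 operand bytes
> >                 i += 3; nargs += 1
> >             elif b == 255:                  # 16.16 fixed: 4 operand bytes
> >                 i += 5; nargs += 1
> >             elif 32 <= b <= 246:            # 1-byte integer operand
> >                 i += 1; nargs += 1
> >             elif 247 <= b <= 254:           # 2-byte integer operand
> >                 i += 2; nargs += 1
> >             elif b in (19, 20):             # hintmask/cntrmask
> >                 nstems += nargs // 2        # pending args imply vstems
> >                 i += 1 + (nstems + 7) // 8  # skip the mask bytes
> >                 nargs = 0
> >             else:                           # operator
> >                 if b in (1, 3, 18, 23):     # h/vstem(hm): count stems
> >                     nstems += nargs // 2
> >                 i += 2 if b == 12 else 1    # 12 is the escape prefix
> >                 nargs = 0
> >                 if b == 14:                 # endchar: glyph boundary
> >                     glyphs.append(blob[start:i])
> >                     start = i
> >                     nstems = 0
> >         return glyphs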
> >
> > If there is interest I can prototype it and report savings.
> >
> > Cheers,
> >
> > behdad
