RE: CFF table processing for WOFF2?

I think that this is definitely something we need to investigate, and we need to do it soon (as in "now"). The WOFF2 spec is still a working draft, so it is not unreasonable to expect it to change (sometimes dramatically, as was the case with e.g. layout feature support in CSS Fonts), and a change like this one won't really put anything in jeopardy - the existing WOFF2 fonts will continue to work while the spec and CTS evolve, and the implementations will eventually be updated.

If there are significant gains to be realized from CFF preprocessing we ought to consider it, but, as Jonathan mentioned, the final decision will depend on the tradeoff between the potential reduction in compressed size and the possible increase in the uncompressed font size.

Behdad, how long do you think it would take for you to get at least a rough estimate of the compression gains and the CFF size increase due to de-subroutinization?
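
(Just to illustrate the kind of measurement I have in mind - the following is only a rough sketch, assuming a fontTools build whose subsetter exposes a desubroutinize option and the Python brotli bindings; "input.otf" is a placeholder. Since WOFF2 currently passes CFF through untransformed, comparing the Brotli-compressed CFF table before and after should be a reasonable proxy for the effect on the WOFF2 payload:)

    # Sketch: Brotli-compressed CFF size before and after de-subroutinization.
    import brotli
    from fontTools import subset
    from fontTools.ttLib import TTFont

    def compressed_cff_size(font):
        # Compress just the raw CFF table data; WOFF2 leaves CFF untransformed,
        # so this approximates the change in the compressed payload.
        return len(brotli.compress(font.getTableData("CFF ")))

    font = TTFont("input.otf")
    raw_before = len(font.getTableData("CFF "))
    brotli_before = compressed_cff_size(font)

    # "Subset" to the full glyph set with de-subroutinization turned on.
    options = subset.Options()
    options.desubroutinize = True
    options.glyph_names = True        # keep names so the charset stays comparable
    subsetter = subset.Subsetter(options=options)
    subsetter.populate(glyphs=font.getGlyphOrder())
    subsetter.subset(font)

    raw_after = len(font.getTableData("CFF "))
    brotli_after = compressed_cff_size(font)

    print("raw CFF:     %d -> %d bytes" % (raw_before, raw_after))
    print("brotli CFF:  %d -> %d bytes" % (brotli_before, brotli_after))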

Thank you,
Vlad


-----Original Message-----
From: Jonathan Kew [mailto:jfkthame@gmail.com] 
Sent: Thursday, April 23, 2015 2:16 AM
To: Behdad Esfahbod; WOFF Working Group
Cc: Ken Lunde; Jungshik Shin
Subject: Re: CFF table processing for WOFF2?

On 23/4/15 02:03, Behdad Esfahbod wrote:
> Hi,
>
> Is the working group open to adding processed CFF table to WOFF2 or is 
> it too late?  Maybe we can do that in a compatible way?
>
> There's already a rumour on the net (which we have anecdotally
> confirmed) that CFF fonts compress better in WOFF2 if desubroutinized.
> It's unintuitive but makes some sense: if Brotli is great at capturing
> redundancy, it should perform at least as well as subroutinization.

This is an interesting possibility, but I do have a concern... unless the decoder can "re-subroutinize" the font (which seems like it would add substantial complexity to the decoder), this has the potential to significantly increase the in-memory footprint of the decoded font. For memory-constrained devices, that might be a poor tradeoff.

(I have no actual statistics to support or refute this. Both the potential savings in compressed size and the resulting change to the uncompressed size would be interesting to know...)

>
> I have only one transform on top of that in mind right now: drop the
> offsets to charstrings.  At reconstruction time, split charstring array
> on endchar.
>
> If there is interest I can prototype it and report savings.
>
> Cheers,
>
> behdad
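
A rough illustration of the reconstruction step Behdad describes above - splitting the concatenated charstring data back into per-glyph charstrings at each endchar. This is only a sketch, not a spec proposal: it assumes desubroutinized Type 2 charstrings (so endchar always appears at the top level) and ignores rarely-used operators (e.g. arithmetic ones) that do not clear the argument stack. The encoder side of the transform would simply concatenate the charstrings in glyph order, omitting the INDEX offset array.

    # Sketch: split a concatenated Type 2 charstring blob on 'endchar'
    # (operator 14), recovering the per-glyph boundaries that the dropped
    # INDEX offsets used to provide.  Assumes a Python 3 bytes object and
    # desubroutinized charstrings.
    def split_charstrings(blob):
        glyphs = []
        start = i = 0
        nstack = 0   # operands currently on the argument stack
        nhints = 0   # accumulated stem hints (sizes the hintmask data bytes)
        while i < len(blob):
            b0 = blob[i]
            if b0 == 28:                    # 3-byte (16-bit) number
                i += 3; nstack += 1
            elif b0 == 255:                 # 5-byte (16.16 fixed) number
                i += 5; nstack += 1
            elif 32 <= b0 <= 246:           # 1-byte number
                i += 1; nstack += 1
            elif 247 <= b0 <= 254:          # 2-byte number
                i += 2; nstack += 1
            elif b0 == 12:                  # escaped (2-byte) operator
                i += 2; nstack = 0
            elif b0 in (1, 3, 18, 23):      # hstem/vstem/hstemhm/vstemhm
                nhints += nstack // 2
                i += 1; nstack = 0
            elif b0 in (19, 20):            # hintmask/cntrmask
                nhints += nstack // 2       # pending args imply a vstemhm
                i += 1 + (nhints + 7) // 8  # skip operator plus mask bytes
                nstack = 0
            elif b0 == 14:                  # endchar: glyph boundary
                i += 1
                glyphs.append(blob[start:i])
                start = i
                nstack = nhints = 0
            else:                           # any other 1-byte operator
                i += 1; nstack = 0
        return glyphs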

Received on Thursday, 23 April 2015 13:49:56 UTC