Re: CFF table processing for WOFF2?

On Wed, Apr 22, 2015 at 11:15 PM, Jonathan Kew <jfkthame@gmail.com> wrote:

> On 23/4/15 02:03, Behdad Esfahbod wrote:
>
>> Hi,
>>
>> Is the working group open to adding processed CFF table to WOFF2 or is
>> it too late?  Maybe we can do that in a compatible way?
>>
>> There's already a rumour on the net (which we have anecdotally
>> confirmed) that CFF fonts compress better in WOFF2 if desubroutinized.
>> It's unintuitive but makes some sense: if Brotli is good at capturing
>> redundancy, it should perform at least as well as subroutinization does.
>>
>
> This is an interesting possibility, but I do have a concern... unless the
> decoder can "re-subroutinize" the font (which seems like it would add
> substantial complexity to the decoder) this has the potential to
> significantly increase the in-memory footprint of the decoded font. For
> memory-constrained devices, that might be a poor tradeoff.
>

That's definitely something to consider.  From our experiments, subroutines
save something in the ballpark of 15% to 20% of the table size.  That's far
less than I expected before I saw the numbers.
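A toy sketch of why an LZ-style compressor can recover subroutine-style redundancy on its own (this is not real CFF data; `sub` just stands in for a small subroutine body, zlib stands in for Brotli, and the byte counts are illustrative, not measurements on real fonts):

```python
import zlib

# Hypothetical 20-byte "subroutine" body.
sub = bytes(range(20))

# Subroutinized: one copy of the body plus 100 one-byte "call" tokens.
subroutinized = sub + b"\x0a" * 100

# Desubroutinized: the body inlined at every call site (2000 bytes raw).
desubroutinized = sub * 100

# The compressor finds the repeats itself, so the inlined stream
# compresses to a size comparable to the subroutinized one.
comp_desub = zlib.compress(desubroutinized, 9)
print(len(desubroutinized), "->", len(comp_desub))
```

The inlined stream also gives the compressor longer, more uniform matches to work with, which is one plausible explanation for the reported WOFF2 gains.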


> (I have no actual statistics to support or deny this. Both the potential
> savings of compressed size and the resulting change to the uncompressed
> size would be interesting to know...)


I'll experiment with this some time.

Another potential saving, which doesn't require any format change, is to
remove the encoding vector, since it's unused in OpenType fonts (the cmap
table is used instead).

Will keep the WG posted as I experiment.

behdad

>> I have only one transform on top of that in mind right now: drop the
>> offsets to charstrings.  At reconstruction time, split charstring array
>> on endchar.
>>
>> If there is interest I can prototype it and report savings.
>>
>> Cheers,
>>
>> behdad
>>
>
>
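The reconstruction step described above (splitting a concatenated charstring blob on endchar) can be sketched roughly like this. `split_charstrings` is a hypothetical helper name; the scanner handles Type 2 operand encodings so an embedded byte value 14 isn't mistaken for the endchar operator, but for brevity it ignores the mask bytes that follow hintmask/cntrmask, which a real implementation would skip by tracking the stem count:

```python
ENDCHAR = 14  # Type 2 endchar operator

def split_charstrings(data: bytes) -> list:
    """Split concatenated Type 2 charstrings at each endchar operator,
    skipping operand bytes so data bytes are not misread as operators."""
    glyphs, start, i, n = [], 0, 0, len(data)
    while i < n:
        b = data[i]
        if 32 <= b <= 246:       # one-byte integer operand
            i += 1
        elif 247 <= b <= 254:    # two-byte integer operand
            i += 2
        elif b == 28:            # 16-bit integer operand (3 bytes total)
            i += 3
        elif b == 255:           # 16.16 fixed-point operand (5 bytes total)
            i += 5
        elif b == 12:            # escaped two-byte operator
            i += 2
        else:                    # one-byte operator (incl. endchar)
            # NOTE: hintmask/cntrmask (19/20) are followed by mask bytes;
            # a real scanner must count stems and skip them here.
            i += 1
            if b == ENDCHAR:
                glyphs.append(data[start:i])
                start = i
    return glyphs
```

For example, `bytes([28, 0, 14, 14])` is the operand 14 followed by endchar, so a naive split on byte value 14 would cut it in the wrong place, while the scanner above keeps it as one charstring.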

Received on Thursday, 23 April 2015 06:36:46 UTC