A More Static-Friendly Version of Patch Subset

There’s been a recurring concern that the patch subset approach requires a
specialized server side to deploy (most recently raised in the TAG review
issue <https://github.com/w3ctag/design-reviews/issues/849>). This is a
major barrier to adoption, and it also makes patches difficult to cache via
CDN (which is likely important to potential adopters), as previously raised
by Skef in #93 <https://github.com/w3c/IFT/issues/93>.

In an effort to address some of these concerns, I’ve begun to consider how
we might modify the method to make static hosting of patches possible. My
approach is to find a way to allow static hosting when desired, while still
leaving the option to dynamically compute responses if an implementer
believes that makes sense.

Here is a rough idea I came up with:


   - The initial URL for an incrementally loaded patch subset font points to
   a minimal subset of the font which supports a small core set of codepoints
   (for example a set of very high frequency CJK codepoints for a CJK font,
   or a very small set of core Latin for a primarily Latin font).

   - The initial font file would contain a new table holding a map from font
   subset description
   <https://w3c.github.io/IFT/Overview.html#font-subset-definition> to a URL
   which hosts a patch that would upgrade the current font to also cover
   that subset (as in the existing patch subset approach, this would be a
   brotli shared dictionary patch).

   - The newly patched subset would have an updated mapping which allows
   further extension.
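To make this concrete, here's a minimal sketch of what the mapping might
look like; the table encoding, subset descriptions, and URLs are all
invented for illustration and not part of any spec:

```python
# Hypothetical sketch of the proposed mapping table (all URLs invented).
# Each entry maps a font subset description, simplified here to a set of
# codepoints, to the URL of a brotli shared dictionary patch that extends
# the current font to also cover that subset.
patch_map = {
    frozenset(range(0x0100, 0x0180)): "https://fonts.example/f/p/latin-ext.br",
    frozenset(range(0x0370, 0x0400)): "https://fonts.example/f/p/greek.br",
    frozenset(range(0x0400, 0x0500)): "https://fonts.example/f/p/cyrillic.br",
}

def patches_for(needed_codepoints):
    """Return URLs of patches whose subsets cover any needed codepoint."""
    return [url for subset, url in patch_map.items()
            if subset & needed_codepoints]

# A page using U+0391 (GREEK CAPITAL LETTER ALPHA) selects the Greek patch.
urls = patches_for({0x0391})
```

The URLs could be plain static files on a CDN or endpoints backed by a
dynamic server; the client can't tell the difference, which is the point.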

This effectively forms a graph of augmentations which the client can follow
to reach whatever state of coverage is desired. One of the nice properties
of this approach is that it allows a completely static solution: preprocess
the font, produce the whole graph and associated patches in advance, and
simply dump them onto a CDN. However, it’s also possible to implement this
dynamically by having the table point to URLs that are fulfilled by a
dynamic implementation. The dynamic approach is helpful in cases where the
graph is prohibitively large to prebuild. You could also blend static and
dynamic, where popular patches are hosted on a CDN and the low usage long
tail is satisfied dynamically.
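As a toy sketch of how a client might walk that graph (all names and data
here are hypothetical; a real client would apply brotli shared dictionary
patches and re-read the mapping table from the patched font):

```python
# Toy model of the augmentation graph. Each font "state" is just the set
# of codepoints covered so far; patch application is simulated as a union.
def extend_font(state, needed, patch_map_for):
    """Follow one patch edge at a time until `state` covers `needed`."""
    downloads = []
    while not needed <= state:
        patch_map = patch_map_for(state)   # map embedded in current subset
        # Sequential by design: pick one useful patch, apply, re-read map.
        subset, url = next((s, u) for s, u in patch_map.items()
                           if s & (needed - state))
        downloads.append(url)
        state = state | subset             # "apply" the patch
    return state, downloads

# Hypothetical graph: every state advertises patches for subsets it lacks.
SUBSETS = {"latin": frozenset("ab"), "greek": frozenset("gd"),
           "cyrillic": frozenset("cz")}

def patch_map_for(state):
    return {cps: f"https://fonts.example/p/{name}"
            for name, cps in SUBSETS.items() if not cps <= state}

state, urls = extend_font(frozenset("ab"), frozenset("abg"), patch_map_for)
```

Note the loop downloads exactly one patch per round trip, which is the
sequentiality drawback discussed below.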

Another nice side effect is that this removes the need to specify a
“PatchRequest” message construct in the specification. The URL space is now
fully under the control of the implementer, who could use whatever message
or identification scheme works for them. This would result in a
significantly simplified specification. It also removes the need to deal
with the CORS preflight requests that come with using custom headers.

There are some definite drawbacks to this approach:


   - The approach I describe above can’t load patches in parallel, so if a
   client wants two different patches that are listed in its current subset
   it needs to download one, apply it, and then download the second and
   apply it. This is likely not much of an issue if you’re augmenting in
   units of script subsets (pretty typical for non-CJK usage), where a user
   likely only needs one script at a time, but it would be pretty
   problematic for CJK, where you would almost always want more than one
   additional subset.

   - For CJK cases, any reasonably fine grained segmentation (e.g. splitting
   into ~100 subsets like we currently do on the Google Fonts API) would
   lead to an unacceptably large graph for the purposes of static hosting.

Here are some possible workarounds for these issues:


   - For the parallel case we could add an optional feature which would
   instruct the client that it is allowed to merge identifiers from
   different subsets into a single URL. This would almost certainly need to
   be handled by a dynamic server side, so it would need to be optional.

      - Another similar option is to allow the font to declare that it’s OK
      to be given a serialized PatchRequest-like message via a URL template,
      allowing an implementation to opt in to behaving exactly like the
      currently specified version of patch subset if there isn’t a good
      subset available in the map.

      - For fully static use cases you could instead pre-split a large CJK
      font into more manageable chunks that are selected by the browser via
      unicode-range (as long as you can find splits that don’t break layout
      rules). The smaller chunks could then have more acceptably sized
      graphs and could be augmented in parallel. For example, one might
      split a CJK font into a high frequency portion and one or more low
      frequency portions.

      - If the above isn’t possible because too many inter-codepoint layout
      rules prevent splitting, then something like range request or IFTB
      could be used instead. Enabling a static version of patch subset
      allows us to have a single encoder which could decide which of the
      two formats to use.
   - For the second problem, large graph sizes, the solutions are much the
   same:

      - Where possible, a font could be pre-split and the splits served via
      unicode-range; this will reduce the graph size within each split.

      - Dynamic serving allows for more granularity and larger graphs while
      still allowing popular patches to be served statically.

      - Fully static cases will likely need to trade off granularity to
      keep the number of files manageable. This will of course give larger
      transfer sizes, but should still be better than the current state of
      the art (particularly by allowing subsets to span layout rules).
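As a sketch of the merged request workaround (the template base, the `add`
parameter, and the subset identifiers are all hypothetical; the point is
only that a client could combine several identifiers from its patch map
into one request for a dynamic server to fulfil):

```python
from urllib.parse import urlencode

# Hedged sketch of the optional "merged request" idea. A dynamic server
# behind this URL would compute one combined patch covering the union of
# the listed subsets. All names here are invented for illustration.
def merged_patch_url(template_base, subset_ids):
    return template_base + "?" + urlencode({"add": ",".join(sorted(subset_ids))})

# One round trip instead of two sequential patch applications:
url = merged_patch_url("https://fonts.example/f/patch", {"greek", "cyrillic"})
```

A static deployment would simply never advertise such a template, so purely
static hosting stays possible.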

So, why have a static version of patch subset instead of just relying on
range request or binned IFT to provide the static hosting friendly option?


   - Both range request and IFTB are limited to only partially transferring
   outline data. As currently proposed, they cannot partially transfer
   layout and other data in the font.

   - These non-outline tables can be very large, so ideally we would want
   to partially transfer those as well.

   - Likewise, variation data (other than per-glyph deltas) can’t be
   incrementally transferred (i.e. adding all of the deltas for a single
   axis).

   - They struggle with handling cases where there are complex substitution
   relationships between codepoints.

As a result, these methods (IFTB, range request) are well suited to
CJK/icon/emoji font use cases, where the predominant type of data is
outline data and there aren’t too many complicated relationships between
codepoints.

A static-friendly version of patch subset would provide a viable and
efficient incremental solution for a broader range of fonts (particularly
non-CJK), and a fully dynamic implementation should still be able to
provide performance similar to the existing message based version of patch
subset.

Note that these are still very early thoughts that likely need refinement
and more exploration/validation before we could seriously consider adopting
this in favour of the current patch subset proposal. However, I thought I’d
raise it in advance of the TPAC meeting, as that would be a great
opportunity to discuss this further.

Finally, here's some very rough napkin math to show this idea is feasible
for completely static setups:

*Example 1: Non-CJK*

   - Let's say we have a non-CJK font and want to divide it into 10
   incremental subsets.
   - If we form a full static graph for that font, that's 9! or ~360k
   static patches needed, kinda high but not completely unreasonable to
   store on a CDN.
   - However, with some additional tweaks we can bring that number down:
      - There's likely very little need for a client to incrementally add
      all 10 subsets, so we can limit the depth of the graph to, let's say,
      5 levels. Now we only need 15k patches; if a client needs more than
      5 levels you can just patch them to the full font.
      - Alternatively, if we can find a way to divide the font into two
      parts (e.g. Latin/Cyrillic/Greek and everything else), then we end up
      with two incremental fonts that each have 5 possible incremental
      subsets. This only needs 4! + 4! or 48 patches stored in advance.
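A quick check of those factorials:

```python
import math

# Sanity-checking the napkin math above, which counts one patch per
# ordering of the remaining subsets.
full_graph = math.factorial(9)                       # ~360k patches
split_graph = math.factorial(4) + math.factorial(4)  # 48 patches
```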

*Example 2: CJK*

   - This will of course require some tweaking to make it reasonable. Let's
   start by saying we can subdivide the font into a high usage subset and
   100 low usage ones.
   - The high usage segment we break into 10 incremental subsets with a
   maximum depth of 5 as above, so that takes 15k patches to support.
   - For the low usage subsets we could simply not support incremental
   loading at all, which doesn't impact performance all that much due to
   their low frequency of occurrence. Or we could support a small number of
   incremental subsets to keep the number of patches reasonable (e.g. 4 per
   top level subset would require only ~700 additional patches).

Received on Wednesday, 6 September 2023 23:02:21 UTC