Speeding up Brotli Patch Generation w/ Custom Encoding

After our discussions at TPAC, I've been thinking about how we could speed
up brotli patch generation by implementing custom brotli encoding logic
(thanks Sergey and Dominik for the idea). It turns out that internally
brotli uses the concept of a meta-block
<https://datatracker.ietf.org/doc/html/rfc7932#section-2>, which can
optionally contain a self-contained block of compressed data
<https://datatracker.ietf.org/doc/html/rfc7932#section-11.3>. This is great
news, as it allows a server to incorporate precompressed data blocks
directly into the generation of a brotli compressed stream.
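
To illustrate why that matters: the stock streaming encoder can already emit
a stream in byte-aligned pieces that simply concatenate together. A minimal
example using the standard Python brotli bindings (method names may vary
slightly between bindings):

import brotli

# Compress two segments through one encoder, flushing in between.
# flush() byte-aligns the output so that everything emitted so far forms a
# decodable prefix; the pieces concatenate into a single valid stream.
c = brotli.Compressor()
piece1 = c.process(b"immutable font segments") + c.flush()
piece2 = c.process(b"glyph data for this request") + c.finish()

stream = piece1 + piece2
assert brotli.decompress(stream) == (b"immutable font segments" +
                                     b"glyph data for this request")

The limitation is that with the stock encoder each flushed piece can still
back-reference earlier data in the window, so it is only valid in the exact
position it was produced in. The custom encoding logic would instead emit
cached segments as self-contained meta-blocks, so the precompressed bytes
stay valid wherever the server splices them into a stream.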

I've started to document my thoughts on how we could actually apply this in
a server implementation here: Speeding up Incremental Transfer Patch
Generation
<https://docs.google.com/document/d/1MdtB_WPC2grAx3vFgLHA1-CqRzQtWj_cJYx40W2VQgI/edit?usp=sharing>.
The doc is still a work in progress, but the main ideas are there.

At a high level, I believe we should be able to:

   - Cache precompressed immutable segments of the original font for use
   during initial response generation.
   - Cache precompressed blocks of glyph data to allow for fast compressed
   patch generation (for both the first and subsequent requests) via
   concatenation (see the sketch below).
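
To make the second bullet concrete, here is a rough sketch of what patch
generation by concatenation could look like on the server. Everything here
is illustrative rather than taken from the doc: precompressed_glyph_blocks,
stream_header, and final_metablock are hypothetical, and producing
self-contained, byte-aligned meta-blocks with ISLAST unset is exactly the
part that needs the custom encoder logic.

from typing import Dict, Iterable

# Hypothetical cache, filled offline or lazily: per-glyph (or per-group)
# data compressed once into self-contained, byte-aligned meta-blocks with
# ISLAST unset.  Producing these blocks requires custom encoding logic.
precompressed_glyph_blocks: Dict[int, bytes] = {}

def build_patch(requested_glyphs: Iterable[int],
                stream_header: bytes,
                final_metablock: bytes) -> bytes:
    """Assemble a brotli-compressed patch by concatenating cached blocks.

    stream_header and final_metablock are assumed to be tiny precomputed
    byte strings (the window-size header padded out to a byte boundary,
    and an empty ISLAST meta-block).  No per-request compression work
    happens here at all.
    """
    out = bytearray(stream_header)
    for gid in sorted(requested_glyphs):
        out += precompressed_glyph_blocks[gid]  # plain byte copy
    out += final_metablock
    return bytes(out)

The expected cost is that each cached block is compressed in isolation,
without cross-block back-references or use of the client's existing data as
context, which is where the larger-patch trade-off mentioned below comes
from.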

Where run-time performance is a concern, this approach should significantly
reduce computational requirements at the cost of potentially larger
patches. The best part is that none of this would require any specification
changes; it can all be done within the existing brotli patch mechanism. So
this could eliminate the need to add a third, new patching format, as
discussed at the last meeting.

As a next step, I'm looking into building a simple prototype to demonstrate
that the idea works and to gather some initial data on performance and
patch sizes.
