Re: Hybrid approach to PFE

> On Aug 8, 2019, at 3:32 PM, Myles C. Maxfield <mmaxfield@apple.com> wrote:
> 
> 
> 
>> On Aug 8, 2019, at 2:37 PM, Garret Rieger <grieger@google.com> wrote:
>> 
>> This is pretty much exactly how Tachyfont (Google's existing solution) works (see: https://docs.google.com/document/d/1AQ2VwiVwF77H2h_nuDHR1A5hRyGlpyIQYpYodMtEz1w/edit). The main reasons we're advocating for the patch/subset approach instead of a Tachyfont-like approach are:
>> - Eliminates the need to download the layout and codepoint coverage information separately.
>> - Allows GPOS/GSUB and other tables to be incrementally transferred as well (a Tachyfont-like solution can only incrementally transfer glyph data).
>> - Allows for patched version upgrades (i.e. when a new version of the font is released, with subset-and-patch you can just patch the client's existing font to get them onto the new version; a Tachyfont-like approach can't do this).
>> - Similar to the HTTP range approach, it's hard to apply compression effectively (for example, if glyph outlines contain redundant data, that redundancy won't get compressed away when the glyphs are transferred in separate requests).
>> Also, I have a feeling that a browser implementation of a client will be easier if we hook in font enrichment before shaping happens, since that's already supported with things like unicode-range.
> We can judge the accuracy of all these statements except the last one by measuring. I think this approach is valuable to include in the performance analysis.
> 
>> That said, we should definitely plan to test a hybrid/tachyfont like approach alongside the other two proposals in the analysis.
>> 
>> 
>> On Thu, Aug 8, 2019 at 1:10 PM Levantovsky, Vladimir <Vladimir.Levantovsky@monotype.com> wrote:
>> Folks,
>> 
>>  
>> 
>> On a couple of different occasions I made remarks about a potentially different approach to progressive font enrichment – the initial idea [that was inspired by internal discussions with Kamal Mansour] prompted me to consider glyphID-based vs. codepoint-based approaches [1], and I recently alluded to a different twist on a glyphID-based approach [2] without giving it much substance in my brief remark. So, this email is an attempt to describe what I have in mind and open it up for discussion as a potential hybrid approach to PFE, drawing on bits and pieces from both the smart-server patch-based solution and Myles' alternative proposal for a byte-range solution.
>> 
>>  
>> 
>> The hybrid approach is a twist on the smart server solution [one that is much less demanding as far as smarts are concerned], which IMO offers additional benefits that address certain concerns we recently discussed. In a nutshell, the solution would consist of implementing the following steps:
>> 
>> 1) Every initial webfont request is satisfied with a font load where all mapping/metrics/layout/shaping tables are intact, and all glyph data is removed (could be as simple as zeroing out loca and glyf tables for TTF glyphs, where the subsequent Brotli compression would essentially eliminate this extra weight).
>> 
Oh, one more thing I forgot to mention - I like this approach because after this initial request, the browser knows all the code points and glyphs in the font. This means that, after the required glyphs for the initial page load have been downloaded, the browser can download the rest of the glyphs in the font before they're actually needed, in priority order, depending on the browser's observation of the user's browsing habits. This could help with the flashy-text problem for dynamic content.
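
Just to make step 1) concrete, here is a minimal sketch of the server-side preprocessing it describes, assuming a TrueType-flavored font and the fontTools library; the file names are placeholders, and a real deployment would presumably serve WOFF2 rather than a raw TTF:

```python
# Minimal sketch of step 1): keep every table intact but replace all outlines
# with empty glyphs. cmap/hmtx/GSUB/GPOS are untouched, so coverage, metrics
# and shaping data survive; loca is rebuilt automatically when the font is saved.
from fontTools.ttLib import TTFont
from fontTools.ttLib.tables._g_l_y_f import Glyph

def strip_outlines(src_path, dst_path):
    font = TTFont(src_path)
    glyf = font["glyf"]
    for name in font.getGlyphOrder():
        glyf[name] = Glyph()  # empty glyph: compiles to zero bytes in glyf
    font.save(dst_path)       # WOFF2/Brotli compression happens separately

strip_outlines("NotoSansJP-Regular.ttf", "NotoSansJP-Regular.initial.ttf")
```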
>> 2) Once a browser receives a webfont, it would be able to do the following:
>> - determine the codepoint coverage for a given webfont (and therefore be able to immediately make decisions about fallback fonts, or kick off another request for an additional font resource), and
>> - make shaping / layout decisions based on the available data to determine the glyph IDs that need to be fetched for rendering the page content.
>> 
>> 3) A browser sends a request for the set of glyphIDs that are needed for this particular page content. (Since all shaping decisions are made by the browser, taking into account CSS feature settings and relevant script- and language-specific font features, a set of glyphIDs is all that needs to be communicated back to the server; consider it a combined instance of multiple byte-range requests, with the caveat that the actual byte ranges will be determined on the server side.)
>> 
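
To illustrate what 2) and 3) might look like on the client side, here is a rough sketch that uses uharfbuzz in place of the browser's own shaper; the endpoint URL and the JSON payload are purely hypothetical, just to show the shape of the data that would travel to the server:

```python
# Rough client-side sketch of steps 2)-3): shape the page text against the
# outline-stripped font to learn which glyph IDs are needed, then request them.
# The endpoint and the {"gids": [...]} payload are made-up placeholders.
import json
import urllib.request
import uharfbuzz as hb

def glyph_ids_for_text(font_path, text):
    with open(font_path, "rb") as f:
        face = hb.Face(hb.Blob(f.read()))
    font = hb.Font(face)
    buf = hb.Buffer()
    buf.add_str(text)
    buf.guess_segment_properties()   # infer script / language / direction
    hb.shape(font, buf)
    # After shaping, GlyphInfo.codepoint holds the glyph ID, not the character.
    return sorted({info.codepoint for info in buf.glyph_infos})

def request_glyph_patch(endpoint, gids):
    body = json.dumps({"gids": gids}).encode("utf-8")
    req = urllib.request.Request(endpoint, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()           # patch blob, in whatever format the protocol defines

gids = glyph_ids_for_text("NotoSansJP-Regular.initial.ttf", "こんにちは、世界")
patch = request_glyph_patch("https://fonts.example.com/patch-glyphs", gids)
```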
>> 4) The font server responds with a patch that includes the data for the requested glyphIDs to update the initial font load, thus creating a complete, functional subset suitable for this particular page.
>> 
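And a corresponding sketch of the server side of 4), again with fontTools; the "patch" here is just a glyph-ID-to-bytes mapping for illustration, not a proposed wire or patch format:

```python
# Server-side sketch of step 4): look up the raw glyf records for the requested
# glyph IDs in the full font. How the patch is encoded and applied is left to
# the protocol; this just shows that no glyph-closure computation is needed.
from fontTools.ttLib import TTFont

def build_glyph_patch(full_font_path, gids):
    font = TTFont(full_font_path)
    glyf = font["glyf"]
    names = font.getGlyphOrder()
    patch = {}
    for gid in gids:
        name = names[gid]
        patch[gid] = glyf[name].compile(glyf)  # raw glyf record bytes
        # Note: composite glyphs reference other glyph IDs as components;
        # a real server would need to include those component records too.
    return patch
```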
> Depending on the patching, this approach may or may not be stateful. I don’t think this approach has to be stateful, because the browser already has all of the font except the outlines from step 1. The server replying with a set of outlines doesn’t need to know which outlines the browser already has.
> 
> I like this approach because it works with range requests and it works with smart servers. The server could tell the browser about its smart-ness during the response from step 1, either by adding an extra header, by modifying the font file format, or maybe some other mechanism. The browser could then either issue range requests, or RPCs to the server, depending on how smart the server is. That way, the solution works with regular standalone Apache servers and with fancy Google Fonts application servers.
> 
> (Assuming, of course, that the smart server approach performs better than the range request approach. If they have similar performance, or if the range request approach somehow performs better, we should just pick that one.)
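
Here is a tiny sketch of that capability-negotiation idea, inspecting the headers that come back with the step-1 response; the "PFE-Methods" header name and its values are purely hypothetical:

```python
# Hypothetical capability negotiation: the step-1 response advertises whether
# the server understands glyph-ID patch requests. "PFE-Methods" is a made-up
# header name; "Accept-Ranges" is the standard header for byte-range support.
import urllib.request

def choose_transfer_method(initial_font_url):
    with urllib.request.urlopen(initial_font_url) as resp:
        # The response body is the outline-stripped font from step 1; the
        # headers tell us what the server can do for follow-up requests.
        methods = resp.headers.get("PFE-Methods", "")
        accept_ranges = resp.headers.get("Accept-Ranges", "")
    if "patch-subset" in methods:
        return "rpc"                 # smart server: send glyph-ID requests
    if accept_ranges == "bytes":
        return "range-request"       # plain static server: fall back to byte ranges
    return "whole-font"              # neither: just fetch the complete font
```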
>> 5) Subsequent content updates would require steps 2) – 4) to be repeated to obtain a new patch.
>> 
>>  
>> 
>> The benefits:
>> 
>> - Much simplified server-side processing that:
>>   - eliminates the need to make complex subsetting decisions,
>>   - arguably gives much faster server-side processing and response time (creating a subset for known glyph IDs is a breeze compared to a codepoint-based calculation of the glyph closure; see the sketch after this list), and
>>   - eliminates the need to send a [potentially significant] amount of redundant data.
>> 
>> - Enabling browsers to:
>>   - make font fallback decisions early on, with the very first font load,
>>   - use the readily available shaping engine to determine glyph IDs and layout, and
>>   - eliminate the implementation complexity associated with determining / optimizing byte-range requests (as described in Myles' proposal).
>> 
>> - Resolving (to an extent) certain privacy concerns – a glyphID-based solution doesn't leak the content of a page to the same extent that a codepoint-based solution would; one has to take certain steps to reverse-trace glyph IDs to determine their semantic meaning.
>> 
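Regarding the glyph-closure point in the first benefit above, here is a small illustration, using fontTools' subsetter, of the work a codepoint-based server has to do and a glyphID-based server can skip (since the client's shaper has already resolved ligatures, alternates and composite components). The font name is a placeholder, and the example assumes the font has an 'ffi' ligature reachable through a default-retained GSUB feature:

```python
# Closure illustration: subsetting by *codepoints* forces the server to chase
# cmap, GSUB and composite components to find every glyph the text can reach.
# A glyphID-based request arrives with that closure already resolved.
from fontTools.ttLib import TTFont
from fontTools.subset import Subsetter, Options

font = TTFont("SourceSerif-Regular.ttf")              # placeholder font
subsetter = Subsetter(Options())
subsetter.populate(unicodes=[ord(c) for c in "ffi"])  # just three codepoints
subsetter.subset(font)
# The retained glyphs are the closure: besides 'f' and 'i' this may include an
# 'f_f_i' ligature pulled in through GSUB, plus any composite components.
print(font.getGlyphOrder())
```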
>>  
>> 
>> Thoughts?
>> 
>> Thank you,
>> 
>> Vlad
>> 
>>  
>> 
>> [1] http://lists.w3.org/Archives/Public/public-webfonts-wg/2019Jul/0018.html
>> [2] http://lists.w3.org/Archives/Public/public-webfonts-wg/2019Aug/0020.html

Received on Thursday, 8 August 2019 23:07:53 UTC