Re: Experimental data on priorities

Mikkel, the 'why' is complicated and varied, I suspect.

In Google's case, I know we like to isolate services like authentication
from static content from ads from large content (e.g., YouTube) for a
variety of reasons.  I suspect we currently end up with as many as 2x more
connections than we really need, and we can likely fix a few of them, but I
anticipate we'll end up with >2 connections for the foreseeable future.

Why Taobao.com and TMall.com (both Alexa top 10) were using >20 H2
connections, or why Amazon was using 15, is quite confusing to me, so I'd
be curious if anyone knows.

On Tue, Jul 16, 2019 at 9:00 AM Patrick Meenan <patmeenan@gmail.com> wrote:

> Lucas can speak to the current state of deployment at Cloudflare, but the
> scheme we deployed mirrors the current proposal well enough that the
> results should apply. It's a little different because the prioritization
> isn't originating from the client, but it is being applied as if it were.
>
> Cloudflare has ample evidence that "it works", at least when used in place
> of HTTP/2's scheme over TCP, and that it compares well to a well-implemented
> HTTP/2 client (when serving web content to browsers, anyway). Where the
> really big gains came into play was in overriding the client-requested
> prioritization from Safari and Edge and applying something that looks more
> like a mix of Chrome and Firefox (sequential critical assets, interleaved
> images). In that case it worked much better than HTTP/2, but that was
> because of the specific clients' HTTP/2 priority schemes, and the bulk of
> the gains came from applying better sequencing server-side (though the
> ability to make server-side decisions was a huge benefit of the scheme).
> It also works well enough that the new scheme is applied independently of
> the client, so Chrome and Firefox don't regress either.
>

Thanks, this makes complete sense to me.
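
In case it helps others on the list, the sketch below is roughly what
"sequential critical assets, interleaved images" amounts to on the serving
side. It's a toy Python illustration, not Cloudflare's (or our) actual
code; 'urgency' and 'incremental' are just shorthand here for "importance
level" and "OK to interleave", not a reference to any particular draft's
field names:

    from collections import deque

    class Stream:
        def __init__(self, stream_id, urgency, incremental, chunks):
            self.stream_id = stream_id
            self.urgency = urgency          # lower value = more important
            self.incremental = incremental  # True = OK to interleave
            self.chunks = deque(chunks)     # response body, pre-chunked

    def schedule(streams):
        """Yield (stream_id, chunk) in the order the server would write them.

        Non-incremental streams (critical assets) are drained one at a time,
        most important first; incremental streams at the same importance
        (e.g. images) share the connection round-robin.
        """
        while any(s.chunks for s in streams):
            active = [s for s in streams if s.chunks]
            top = min(s.urgency for s in active)
            level = [s for s in active if s.urgency == top]
            sequential = [s for s in level if not s.incremental]
            if sequential:
                s = sequential[0]           # finish one critical asset fully
                while s.chunks:
                    yield s.stream_id, s.chunks.popleft()
            else:
                for s in level:             # one chunk each, then repeat
                    yield s.stream_id, s.chunks.popleft()

    # Example: HTML, two render-blocking scripts, three images.
    streams = [
        Stream(1,  urgency=0, incremental=False, chunks=["html"] * 2),
        Stream(3,  urgency=1, incremental=False, chunks=["js-a"] * 2),
        Stream(5,  urgency=1, incremental=False, chunks=["js-b"] * 2),
        Stream(7,  urgency=3, incremental=True,  chunks=["img-1"] * 2),
        Stream(9,  urgency=3, incremental=True,  chunks=["img-2"] * 2),
        Stream(11, urgency=3, incremental=True,  chunks=["img-3"] * 2),
    ]
    for sid, chunk in schedule(streams):
        print(sid, chunk)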

>
> Given that you can represent the proposed scheme in its entirety with the
> HTTP/2 schemes, I'm not sure you'll ever see an application performance
> gain if you are using the same client and applying the exact same scheme to
> both modes.  That said, the code complexity may prevent some features from
> being implemented in the tree that could be added fairly easily to the new
> scheme. The one that comes to mind for Chrome in particular is to enable
> interleaving of image data instead of delivering images sequentially. With
> the new scheme it's pretty much just carrying a bit through the stack to
> indicate that the given fetch should be interleaved, and it can be mixed
> into the serialized priorities trivially. Doing the same with the full HTTP/2
> tree would be considerably more complex.
>

Agreed. We're starting to write the code to compare gQUIC's use of SPDY
priorities with Chrome's usage of the H2 tree, with your proposal in this
case being basically a variant of SPDY priorities. We think that comparison
should be relatively straightforward, but anything more complex would be a
large amount of work.
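
To make the comparison concrete, the toy sketch below shows the two
representations side by side: a SPDY-style absolute level per stream versus
the exclusive-dependency chain ("linked list") that a Chrome-like client
builds in the H2 tree. It's illustrative only, not the actual Chrome or
gQUIC code, and it glosses over weights and mid-load re-prioritization:

    # Toy comparison of the two wire representations (illustrative only;
    # not the actual Chrome or gQUIC code).  Chrome-style request levels:
    LEVELS = ["HIGHEST", "MEDIUM", "LOW", "LOWEST", "IDLE"]

    def spdy_style(streams):
        """SPDY/gQUIC style: each stream just carries its absolute level."""
        return {sid: LEVELS.index(level) for sid, level in streams}

    def h2_tree_style(streams):
        """H2-tree style: chain streams into an exclusive-dependency list,
        ordered by level, so the 'tree' degenerates into a linked list.
        Weights and re-prioritization are ignored in this sketch."""
        ordered = sorted(streams, key=lambda s: LEVELS.index(s[1]))
        deps = {}
        parent = 0                      # stream 0 is the tree root
        for sid, _level in ordered:
            deps[sid] = {"parent": parent, "exclusive": True, "weight": 16}
            parent = sid
        return deps

    streams = [(1, "HIGHEST"), (3, "MEDIUM"), (5, "LOWEST"), (7, "MEDIUM")]
    print(spdy_style(streams))      # {1: 0, 3: 1, 5: 3, 7: 1}
    print(h2_tree_style(streams))   # each stream points at the previous one

The point the sketch tries to make is just the state/complexity contrast:
the first form is a flat per-stream value, the second is a structure the
server has to maintain and walk.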


> On Mon, Jul 15, 2019 at 10:29 PM Ian Swett <ianswett@google.com> wrote:
>
>> When I presented slides on HTTP/3 priorities at the QUIC interim, one
>> consistent point was that the HTTP/3 working group wanted some experimental
>> evidence that an alternate scheme worked in practice.  gQUIC has always
>> used SPDY style priorities, FWIW, but we have no comparative data at
>> the moment.
>>
>> I can imagine a few ways to evaluate a priority scheme, but I'd like to
>> know whether I have the correct ones (and their relative importance):
>> - Application performance
>> - Code size/complexity
>> - Bytes on the wire
>> - Computational complexity/potential for attacks
>> - Reduction/change in edge cases (particularly for HTTP/3 without HoL
>> blocking)
>> - New capabilities (e.g., easier for LBs/AFEs to contribute?)
>>
>> *Experimentation Options*
>> The H2 stack at Google supports FIFO, LIFO and H2 priorities.  However,
>> it’s not currently using LOWAT and previous experiments have yielded no
>> measurable change in performance, so H2 does not seem like a good
>> experiment platform.
>>
>> For gQUIC, we already have an implementation of H2 priorities that we’re
>> not using.  We can wire up an experiment to start using them and compare
>> to SPDY style priorities, but given that Chrome's internal priority scheme
>> has 5 levels and translates to a linked list in H2 priorities, I believe that
>> would only provide information on code size/complexity and bytes on the
>> wire.  Application performance would only change if one of the two schemes
>> had an implementation bug.
>>
>> Expanding Chrome’s usage of priorities is possible, but it’s a longer-term
>> project, and I don't know whether they'll change.
>>
>> *Other data*
>> Most thinking about priorities is based on the idea that a page is loaded
>> over a single connection, but in fact that’s extraordinarily rare, as I
>> presented at the interim (Wikipedia being the notable exception). Would it
>> be useful to have more data on this from the client and/or server
>> perspective?
>>
>> We'd be happy to work with someone else on gathering data as well,
>> assuming the WG would find the data we're gathering more valuable than what
>> Chrome/Google can provide alone.
>>
>> Thanks, Ian
>>
>

Received on Wednesday, 17 July 2019 23:52:03 UTC