Re: Experiences with HTTP/2 server push

>
>
> If you can come up with a complete delivery plan on the server, that is
> definitely ideal. There is some academic work that tries to do this (see
> Klotski: https://www.usenix.org/node/189011). I also liked your
> "Coordinated Image Loading" article as an example of why server-defined
> priorities can be a good idea. I am curious though -- how do you deal with
> cases where the page is heavily dynamic and some of the important content
> may be fetched by XHRs? Do you expect that the full dependency information
> is available a priori, before the main response is served to the client?
>

Thanks for the link! I will take a closer look.

Regarding dynamic content: so far we have focused on XHR content that is
fetched on initial page load. From version 1.0 up to 1.4 we have included a
learning mode that introduces small, artificial random pauses between DATA
frames of an HTTP/2 stream. We also make the frames purposely small and
variable in size, say around 512 bytes in length. Then we record the time
when each frame is delivered (as accurately as our server can know through
the blanket of abstractions that the OS provides). During initial page load
multiple streams are downloaded in parallel, and the pauses improve (up to
a point, before the law of large numbers kicks in) the overall randomness
of the process. Therefore, on repeated loads of the same page, frames from
different streams that are almost always delivered very close in time point
to a correlation between their streams. We sort the correlations in order
of plausible causality and postulate a dependency between the two streams.
This works well even for XHR content, where the browser doesn't provide
explicit dependency information.
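To make the idea concrete, here is a minimal sketch of that inference step. Everything in it is a simplification of what I described above: the data layout, the 10 ms "closeness" window, the threshold, and the function names are all made up for illustration, not ShimmerCat's actual code.

```python
import itertools

# Hypothetical recorded data: for each page load, a list of
# (timestamp_seconds, stream_id) frame-delivery events.
loads = [
    [(0.010, 1), (0.052, 3), (0.055, 5), (0.120, 7)],
    [(0.012, 1), (0.060, 3), (0.064, 5), (0.131, 7)],
    [(0.009, 1), (0.047, 3), (0.050, 5), (0.118, 7)],
]

WINDOW = 0.010  # frames closer than 10 ms count as "close in time"

def close_pairs(events):
    """Yield ordered stream pairs whose frames arrived within WINDOW."""
    for (t_a, s_a), (t_b, s_b) in itertools.combinations(sorted(events), 2):
        if s_a != s_b and abs(t_b - t_a) <= WINDOW:
            # sorted() put the earlier frame first, hinting at causality
            yield (s_a, s_b)

def infer_dependencies(loads, threshold=0.8):
    """Pairs that are close in almost every load become postulated deps."""
    counts = {}
    for events in loads:
        for pair in set(close_pairs(events)):
            counts[pair] = counts.get(pair, 0) + 1
    return sorted(p for p, c in counts.items() if c / len(loads) >= threshold)

print(infer_dependencies(loads))  # [(3, 5)]: stream 5 likely depends on 3
```

The random pauses matter because without them two unrelated streams could also arrive back to back on every load just by scheduling accident; the injected jitter breaks spurious co-occurrence while genuine dependencies survive it.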

In simpler terms, we use statistics to try to infer dependencies from the
browser's behavior. The only downside of our technique is that it needs too
much computing power to finish in under ten seconds on a developer's
laptop, because each page of the site that differs significantly from the
others needs to be fetched several times by a typical user agent, e.g., a
browser. It would work well as a cloud service, though, and it is also fast
enough to run as part of CI workflows.



>
>
>>> Our team has been experimenting with H2 server push at Google for a few
>>> months. We found that it takes a surprising amount of careful reasoning to
>>> understand why your web page is or isn't seeing better performance with H2
>>> push.
>>>
>>>
>> Oh, but it is a lot of fun :-)
>>
>
> It is for us too :-), but I imagine many devs would find it frustrating.
> Hence our attempt to try to distill our experiences into a "best practices"
> doc.
>
>
>> In our experience as well the biggest performance killer of HTTP/2 Push
>> is TCP slow start and the fact that push promises are bulky. Send many of
>> them and an ACK round-trip will be needed.
>>
>
> Interesting point about needing ACK round-trips just to send the push
> promises. We hadn't run across that problem specifically. Is this because
> you're sending many push promises? Is there some reason why hpack cannot
> compress a sequence of push promises?
>
>
Yes, in our early prototypes we just wanted to see how far the technique
would take us, so we grabbed a bloated HTML template and tried to push all
of it in the order our algorithms were spitting out ;-).


> If you've released any data about HTTP/2 Push performance in ShimmerCat,
> I'd be interested to read it. I did notice one article on your site,
> although that only dealt with one page with a fixed network connection:
> https://www.shimmercat.com/en/info/articles/performance/
>
>
What would you consider an interesting, standard measure for this case? We
are mainly interested in reducing the impact of latency on page load time,
so we tend to measure how much page load time decreases for a client with a
latency of around 120 ms. That's easy to standardize, but the other
variable is the website itself, and that one is harder to standardize. In
our consultancy projects we tend to set a 30 to 50% improvement in load
time and time-to-start-reading over baseline as an achievable target, but
each project is quite unique :-(.
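For concreteness, this is the arithmetic behind that target; the load times below are invented for illustration, not measurements from any real project:

```python
def improvement_pct(baseline_ms, optimized_ms):
    """Relative reduction in load time, as a percentage of the baseline."""
    return 100.0 * (baseline_ms - optimized_ms) / baseline_ms

# Illustrative numbers only: a page loading in 3.2 s at ~120 ms latency
# before tuning, and in 1.9 s afterwards.
print(round(improvement_pct(3200, 1900), 1))  # 40.6 -- inside the 30-50% band
```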

Received on Monday, 8 August 2016 20:05:19 UTC