- From: Alcides Viamontes E <alcidesv@shimmercat.com>
- Date: Fri, 5 Aug 2016 21:07:56 +0200
- To: Tom Bergan <tombergan@chromium.org>
- Cc: Martin Thomson <martin.thomson@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>, Vladir Parrado Cruz <vparrado@gmail.com>, Nejc Vukovic <nejc.vukovic@gmail.com>, Ludvig Bohlin <ludvigbohlin@gmail.com>
- Message-ID: <CAAMqGzbwaFiMXy6r+r2avvv+ESG+sN0MK5FLdNZ8tB2xb=r3uA@mail.gmail.com>
> Let's say the server wants to prioritize a subset of streams differently than the priorities specified by the client, or differently from the default priorities. How should it actually implement this? The simplest implementation is to mutate the H2 priority tree directly. This makes the H2 priority tree the single prioritization data structure in the server. It's also attractive because H2 priorities can be communicated to lower layers like QUIC <https://tools.ietf.org/html/draft-hamilton-early-deployment-quic-00#section-9>. We are aware of a few servers that update the priority tree like this, e.g., see Apache's h2_session_set_prio <https://github.com/icing/mod_h2/blob/master/mod_http2/h2_session.c#L1245>.

> However, if the server does this, it has a problem: the H2 priority tree is a shared data structure. If it makes a change, its local copy of the data structure can be out-of-sync relative to the client's copy. A future PRIORITY frame from the client may have a different meaning than intended if the server has already changed its tree locally. The sentence you quoted describes the reactions of a naive server to this problem: Maybe I can keep the client's tree in sync by sending a PRIORITY frame? (Sorry for not making this more clear.) Of course, this doesn't actually solve the problem, since the server's PRIORITY frames could race with the client's. (Note that we're not aware of any servers that actually do this; we were just hoping to prevent any from trying.)

Hi. Great work over there. If browser and server are far apart, a re-prioritization may arrive at the server too late to be effective. Our solution to both the race conditions and the RTT problem is to have our server learn the priorities and dependencies, build a delivery plan once, and use it many times. In that sense, priorities and dependencies in the HTTP/2 spec as it stands today are good enough for us. And the implementation complexity is about the same as implementing on-the-fly re-prioritization.

> RFC 7540 talks about another kind of race: removing closed streams from the tree. The solution proposed by the RFC is to keep closed streams in the tree as long as possible. The RFC does not discuss this other kind of race -- reprioritizing streams on the server -- and this seems like something servers are very interested in doing. AFAIK, no one has really studied the impacts of this race nor provided guidance about how to avoid it. We don't have any great solutions, either, we just wanted to raise the problem to be sure that server implementors are aware of it.

This thing with closed streams takes a bit of getting used to. We have to be very careful to discard as much information as possible about closed streams as early as possible, yet still keep some around for a little while so we know which stream references from the browser are valid. Since we are not handling priority frames online, the amount of information we have to save is relatively small ("was this stream ever used?"), but if we were following the letter of the spec this would be a very worrying issue. A rough sketch of both ideas follows.
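To make "delivery plan" and "was this stream ever used?" less abstract, here is a minimal sketch in Python of roughly what I have in mind. It is not ShimmerCat code, and every name in it (DeliveryPlan, ConnectionState, plan_for) is made up for illustration; the point is only that once re-prioritization is taken offline, the per-connection state can shrink to little more than a high-water mark of stream ids.

    # Minimal sketch, not ShimmerCat code: a delivery plan learned once per
    # entry URL and reused on later connections, plus the small amount of
    # per-connection bookkeeping needed to tell whether a stream reference
    # from the browser points at a stream that was ever used.

    from dataclasses import dataclass
    from typing import Dict, List, Tuple


    @dataclass
    class DeliveryPlan:
        # Ordered (resource path, weight) pairs learned from earlier visits.
        steps: List[Tuple[str, int]]


    # Keyed by entry URL and built offline or during first visits, so no
    # PRIORITY frame has to be acted upon while a page is being served.
    LEARNED_PLANS: Dict[str, DeliveryPlan] = {}


    @dataclass
    class ConnectionState:
        # Highest client-initiated stream id seen so far on this connection.
        highest_client_stream: int = 0

        def note_stream(self, stream_id: int) -> None:
            self.highest_client_stream = max(self.highest_client_stream, stream_id)

        def stream_was_ever_used(self, stream_id: int) -> bool:
            # Client-initiated streams have odd ids in HTTP/2; anything at or
            # below the high-water mark existed at some point, even if it has
            # long since been closed and forgotten.
            return stream_id % 2 == 1 and stream_id <= self.highest_client_stream


    def plan_for(entry_url: str) -> DeliveryPlan:
        # Fall back to a trivial plan when nothing has been learned yet.
        return LEARNED_PLANS.get(entry_url, DeliveryPlan(steps=[(entry_url, 256)]))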
>> Our team has been experimenting with H2 server push at Google for a few months. We found that it takes a surprising amount of careful reasoning to understand why your web page is or isn't seeing better performance with H2 push.

Oh, but it is a lot of fun :-) In our experience as well, the biggest performance killers of HTTP/2 Push are TCP slow start and the fact that push promises are bulky: send many of them and an ACK round-trip will be needed.

However, HTTP/2 Push *is* useful at other times as well. For example, if the server is using cache digests via cookies and it positively knows that the browser doesn't have a script referenced at the bottom of the page, like this: ...something <script src="/jquery.js?vh=3fhhwq"></script>, it can pause the HTML stream a little bit before "<script src", send a push promise for "/jquery.js?vh=3fhhwq", and resume sending the HTML document. Chances are that the TCP window is bigger by then. A sketch of this interleaving is below.

Also notice a related scenario, which runs counter to a pattern from the times of HTTP/1.1: instead of making one big HTML file that includes all the parts of a page, use (the closest thing to) HTML imports. If elements of a page that seldom change, like the navigation bar and the visual footer, are made imports, then they can be cached and traffic to the server can be reduced. Nobody does that today because of the latency it adds. Using HTTP/2 Push in the way described above, it becomes possible at no performance cost. Looked at that way, HTTP/2 Push is a big deal for web components. And it is not far-fetched: we are planning to release these features in ShimmerCat 1.7. The only thing we require from browsers is that they check whether there is a push promise for a resource strictly -- but as late as possible -- before starting a fetch.

The same can be done with hierarchies of scripts, although we will have to wait a bit for people to stop making big .js blobs...
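For concreteness, here is a rough sketch of that interleaving. The connection object and its methods (send_data, send_push_promise, push_response) are hypothetical and only stand in for whatever frame-level API a server exposes; the cache-digest lookup is likewise simplified to a callback, and the file path is illustrative.

    # Rough sketch with hypothetical helpers: stream the HTML, and just before
    # the "<script src" tag send a PUSH_PROMISE for the asset the cache digest
    # says the browser is missing, then resume the HTML document.

    MARKER = b'<script src="/jquery.js?vh=3fhhwq"'
    SCRIPT_PATH = "/jquery.js?vh=3fhhwq"


    def serve_html(conn, html_stream_id: int, html: bytes, client_has) -> None:
        cut = html.find(MARKER)
        if cut == -1 or client_has(SCRIPT_PATH):
            # Nothing to push; send the document in one go.
            conn.send_data(html_stream_id, html, end_stream=True)
            return

        # The first part of the document goes out immediately; by the time the
        # promise follows, the TCP congestion window has hopefully grown past
        # its initial value.
        conn.send_data(html_stream_id, html[:cut], end_stream=False)

        # Per RFC 7540, the PUSH_PROMISE should precede the part of the
        # document that references the resource, so the browser does not race
        # us with its own request for it.
        promised_id = conn.send_push_promise(html_stream_id, path=SCRIPT_PATH)

        # Resume the HTML, then fulfil the promise on the reserved stream.
        conn.send_data(html_stream_id, html[cut:], end_stream=True)
        with open("static/jquery.js", "rb") as f:  # illustrative location
            conn.push_response(promised_id, f.read())

In practice the cut point would come from a template or a learned offset rather than a byte search, but the ordering of the frames is the part that matters.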
--
Alcides Viamontes E.
Chief Executive Officer, Zunzun AB
(+46) 722294542
(www.shimmercat.com is a property of Zunzun AB)

Received on Friday, 5 August 2016 19:16:08 UTC