- From: Patrick Meenan <patmeenan@gmail.com>
- Date: Sun, 5 May 2019 09:52:50 -0400
- To: Andy Green <andy@warmcat.com>
- Cc: Ian Swett <ianswett@google.com>, Patrick McManus <mcmanus@ducksong.com>, Lucas Pardue <lucaspardue.24.7@gmail.com>, Amos Jeffries <squid3@treenet.co.nz>, HTTP Working Group <ietf-http-wg@w3.org>
- Message-ID: <CAJV+MGzxN9BErY+TN2scJyT2brzb1UZFhtHgqdhLruQRGx8fKA@mail.gmail.com>
Using flow control from the client to try to schedule resource delivery doesn't work. It's a bit like pushing on a string: it causes HOL blocking, and there is no way to do it without a 1 RTT delay between each action. Every case where we've considered using flow control to control the streams has ended up being worse for performance. The latest example that comes to mind was scheduling image delivery on low-bandwidth connections by pausing streams after the first few KB of each image are delivered, but by the time the WINDOW_UPDATE makes it to the server there is enough data already in flight to negate the benefit.

Assume the simple case of resources A, B and C that you want delivered in that order:

1 - Client sends requests for A, B and C, holding the flow-control windows for B and C at zero to pause them
2 - Server responds with A
3 - Client sends a WINDOW_UPDATE for B to start streaming it
4 - Server responds with B
5 - Client sends a WINDOW_UPDATE for C to start streaming it
6 - Server responds with C

(A rough cost model of this sequence is sketched after the quoted thread below.)

That's functionally identical to HTTP/1.1, where the request for A is sent first followed by requests for B and C, all over the same connection, with HOL blocking for A and a delay between each resource.

The same applies to sending a high-priority request (X) while other low-priority responses are already in flight. If you pause the other streams then you introduce whatever back-end delay there is in generating/fetching X as a gap in the data transfer. If you wait for X to start streaming and then pause the other streams, you are introducing a 1 RTT delay in raising the priority of X. In both cases, resuming the other streams after X completes introduces another 1 RTT delay, and if X can't fully saturate the connection you are also artificially limiting the transfer.

To be able to fully saturate the connection, most clients open up pretty large windows right at the start and just let TCP flow control take over for the connection as a whole. I'm sure there were use cases in mind when flow control was added to the spec, but at least from a browser/web-content perspective it would be easier if it didn't exist at the HTTP/2 layer and was just handled by the transport.

On Sat, May 4, 2019 at 10:29 AM Andy Green <andy@warmcat.com> wrote:
>
> On 04/05/2019 14:26, Patrick Meenan wrote:
>
> > tx credit round robin is basically the default HTTP/2 prioritization
> > of even weighting across streams, isn't it? If so and the content
> > being
>
> I think you may have missed the point.
>
> PRIORITY isn't implemented in lws, but it does implement tx credit
> more or less. And you can simulate PRIORITY by the client using tx
> credit modulation to control how much of what can come at any given
> time.
>
> > served is web pages to a browser it can be as much as an order of
> > magnitude slower than if the priorities were honored (Chrome
> > mitigates this a bit currently by holding back requests
> > because... wait for it... the current state of priority support is
> > pretty bad).
>
> Yeah... maybe there's a reason for that.
>
> > If you have a typical page with a few blocking scripts and
> > stylesheets in the head, a bunch of images (some visible, some not)
> > and more scripts at the end of the document, delivering them in
> > priority order lets the browser start rendering the content by just
> > loading the scripts/stylesheets in the head (and then prioritizing
> > the images).
> > If you round-robin all of the streams the page will be blank until
> > everything finishes downloading and then it will finally display.
> > Not only is that an order of magnitude slower compared with HTTP/2
> > servers that do support priorities, it's also MUCH slower than
> > HTTP/1.1.
>
> ... order of magnitude eh...
>
> > I'm sure there are cases where it doesn't matter (and maybe those
> > are the cases where libwebsockets is used) but it is absolutely
> > critical for
>
> No it isn't. Nothing breaks... everything works just fine without it.
> You can quite happily trade off its complexity and memory footprint by
> ignoring it and eating some reduction in speed the first time the site
> is visited. There's a client-side cache in the browser case that can
> be populated once and will likely stay there a year or whatever. Then
> much of this struggle and complexity turns out to be over
> "prioritizing" which few bytes of a 304 you get first after the first
> access.
>
> In the cases where there are advantages to controlling the ordering,
> PRIORITY and all points south like "stream reprioritization", priority
> trees, etc. are artifacts and states that live in the client and can
> stay there, simply driving client decisions about stream WINDOW_UPDATE
> emission to get almost the same result implicitly.
>
> > browser <-> server connections. There's also a good chance that your
> > users don't realize it isn't supported or don't realize it isn't
> > working (which is entirely believable given the current situation
> > <https://github.com/andydavies/http2-prioritization-issues#cdns--cloud-hosting-services>).
>
> Lws operates at the very low end, down to things like esp32. In that
> world, the expense of the tls tunnel is really high (even with
> mbedtls, >32KB per tunnel just for buffers on machines with < 200KB
> global heap). The tunnels are slow to set up with large keys in the
> certs too. Using h2 to mux inside one tls context is already a huge
> win. In these cases optimizing ordering is unimportant.
>
> -Andy
>
> > On Sat, May 4, 2019 at 1:23 AM Andy Green <andy@warmcat.com> wrote:
> >
> > On 03/05/2019 21:26, Ian Swett wrote:
> > >
> > > On Tue, Jan 29, 2019 at 3:12 PM Patrick McManus
> > > <mcmanus@ducksong.com> wrote:
> > >
> > > > On Tue, Jan 29, 2019 at 2:56 PM Patrick Meenan
> > > > <patmeenan@gmail.com> wrote:
> > > >
> > > > > As far as I can tell, the placeholder streams serve to handle
> > > > > the Firefox use case of using idle streams for groupings,
> > > >
> > > > yes.. and you can probably solve for that in a simpler way by
> > > > having an explicit set of groups with simple ways to share
> > > > between them.
> > > >
> > > > But what I think you really need to do with your proposal is
> > > > address what you're giving up by removing the tree structure
> > > > because it was an explicit choice to include it.
> > > >
> > > > That structure exists because Google convinced the WG that it
> > > > was important to be able to combine an arbitrarily large number
> > > > of sets of streams together fairly. (and the solution allowed
> > > > generalized sharing, not just fairness).
> > >
> > > This thread got far ahead of me, but I wanted to ask for more
> > > motivation behind the tree structure (links welcome).
> > > 'Google' may have argued for it, but that doesn't mean it was ever
> > > used as envisioned. Is anyone else taking advantage of it?
> >
> > libwebsockets supports h2 server, h2 client with stream bundling,
> > and ws-over-h2 server... it ignores PRIORITY and I've only ever been
> > asked about it one time. It just uses a round-robin scheduler
> > between streams that have tx credit + more to send to allow them to
> > write frames on the network connection.
> >
> > > > In short, if you've got a set of streams from tabs A, B, and C
> > > > you cannot really expect them to be coordinated in an absolute
> > > > priority sense - but if they were all rooted at the same level
> > > > in a tree they could share fairly and then the streams within
> > > > the tab could locally coordinate their priority.
> > >
> > > For a given application (browser, app, etc), I'd expect absolute
> > > priority to be a fairly good indicator across connections, because
> > > that's easy and the alternatives are harder.
> >
> > PRIORITY and the stream tx credit scheme have an almost complete
> > overlap. If the default stream tx credit is small, or it's updated
> > to walk the credit back after opening the stream, the client can use
> > modulation of that per-stream to enforce the detailed priority it
> > wants without PRIORITY or trees of PRIORITY or whatever being an
> > explicit thing on the wire told to the server at all.
> >
> > And since properly managing tx credit is a bug-magnet, from that
> > perspective it would've been better to exercise and re-use that
> > instead of PRIORITY.
> >
> > So IMHO, the whole of PRIORITY is a white elephant.
> >
> > > > This is a much more important property in an aggregator like a
> > > > CDN who might be bringing different front end connections into a
> > > > single backend connection.. the priority expressed by the client
> > > > should exist in some ways e2e (css before imgs!), but in other
> > > > ways hop to hop (you don't want every css to stall every
> > > > browser's images).. the tree allows that.
> > >
> > > This statement concerns me for a few reasons. One is I doubt any
> > > CDNs can pull this off at scale, so I don't think it's practical.
> > > Someone should correct me if I'm wrong. Another is that to pull
> > > this off, you'd need reliable ways to know that a single user was
> > > the owner of two different connections, which seems potentially
> > > concerning from a privacy perspective? Lastly, I don't think it
> > > would result in optimal loading. If one could do this, strict
> > > numerical priorities would likely work better, because they'd
> > > preserve most of the clients original intent instead of equally
> > > sharing bandwidth between blocking resources (ie: HTML, CSS) and
> > > non-blocking ones (ie: images).
> >
> > If the CDN had the information to do that well, it can also express
> > it by stream tx credit modulation.
> >
> > -Andy
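A back-of-the-envelope model of the A, B and C sequence described at the top of this message, to make the per-resource cost concrete. The link parameters and resource sizes are invented for illustration, and the model ignores TCP slow start, server think time and frame overhead; it only shows the roughly one extra RTT per resource that gating each stream behind a client-sent WINDOW_UPDATE adds.

```go
// Back-of-the-envelope model: delivering A, B and C in order when B and C
// are gated behind client-sent WINDOW_UPDATEs, versus letting the server
// stream them back-to-back in priority order. Numbers are illustrative.
package main

import "fmt"

func main() {
	const (
		rtt = 0.080       // round-trip time in seconds (80 ms)
		bw  = 1_500_000.0 // bandwidth in bytes/second (~12 Mbit/s)
	)
	sizes := []float64{30_000, 90_000, 250_000} // resources A, B, C in bytes

	// Server-driven priorities: one request round trip, then the server
	// streams A, B and C back-to-back with no idle time.
	serverDriven := rtt
	for _, s := range sizes {
		serverDriven += s / bw
	}

	// Client-driven flow control: B and C are held at a zero window and
	// each is released by a WINDOW_UPDATE sent only after the previous
	// resource has fully arrived, so every resource after the first adds
	// roughly a full round trip of idle time on the connection.
	clientGated := rtt
	for i, s := range sizes {
		if i > 0 {
			clientGated += rtt // WINDOW_UPDATE up + first byte back down
		}
		clientGated += s / bw
	}

	fmt.Printf("server-driven priorities:    %4.0f ms\n", serverDriven*1000)
	fmt.Printf("client WINDOW_UPDATE gating: %4.0f ms\n", clientGated*1000)
	fmt.Printf("extra idle time:             %4.0f ms (~1 RTT per gated resource)\n",
		(clientGated-serverDriven)*1000)
}
```

With these made-up numbers the gated schedule loses about 160 ms, i.e. one 80 ms RTT for each of the two paused resources, independent of bandwidth.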
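And a minimal sketch, in Go rather than actual libwebsockets code, of the round-robin tx-credit scheduling Andy describes in the quoted thread: on each pass, any stream that still has both tx credit and pending data gets to write one DATA frame, so a client can shape delivery order purely by how it grants (or withholds) per-stream credit. The stream IDs, sizes and window values are made up.

```go
// A sketch (plain Go, not libwebsockets code) of round-robin scheduling
// between streams that have both tx credit and data pending: each pass,
// every eligible stream gets to emit one DATA frame.
package main

import "fmt"

type stream struct {
	id      uint32
	credit  int // remaining flow-control window, in bytes
	pending int // response bytes still waiting to be sent
}

const maxFrame = 16384 // default HTTP/2 maximum DATA payload

func minInt(a, b int) int {
	if a < b {
		return a
	}
	return b
}

// schedule writes frames round-robin until every stream is either
// finished or stalled waiting for more credit from the peer.
func schedule(streams []*stream) {
	for {
		wrote := false
		for _, s := range streams {
			if s.credit <= 0 || s.pending <= 0 {
				continue // stalled on credit, or nothing left to send
			}
			n := minInt(maxFrame, minInt(s.credit, s.pending))
			s.credit -= n
			s.pending -= n
			fmt.Printf("DATA stream=%d len=%d\n", s.id, n)
			wrote = true
		}
		if !wrote {
			return
		}
	}
}

func main() {
	// Hypothetical state: the client granted stream 1 a large window,
	// held stream 3 at zero and gave stream 5 one frame of credit.
	streams := []*stream{
		{id: 1, credit: 1 << 20, pending: 60_000},
		{id: 3, credit: 0, pending: 120_000},
		{id: 5, credit: 16_384, pending: 200_000},
	}
	schedule(streams)
}
```

Holding stream 3 at a zero window and giving stream 5 a single frame of credit effectively serializes delivery behind stream 1, which is the "tx credit modulation" standing in for PRIORITY in the discussion above.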
Received on Sunday, 5 May 2019 13:53:25 UTC