- From: Tom Bergan <tombergan@chromium.org>
- Date: Fri, 19 Jun 2020 11:10:35 -0700
- To: Lucas Pardue <lucaspardue.24.7@gmail.com>
- Cc: HTTP Working Group <ietf-http-wg@w3.org>
- Message-ID: <CA+3+x5HRMOD_XRUpGRqFY=pttj=izswzSLdSDKuKXhAPCx6wfQ@mail.gmail.com>
On Fri, Jun 19, 2020 at 9:26 AM Lucas Pardue <lucaspardue.24.7@gmail.com>
wrote:

> The server's role in optimizing the use of available bandwidth is an
> interesting perspective to take, especially considering the client's
> responsibility for providing flow control updates. In the basic HTTP/3
> priority implementation of the quiche library, the application processes
> the priority header and provides that information when sending response
> data. Internally the library uses an implementation-specific API method
> to set the priority of the transport stream. HTTP response payload data
> is written to each stream until the QUIC layer tells me it cannot take
> any more because the send window is full. In a separate loop, the QUIC
> layer selects stream data to transmit based on the priority. The
> client's expedience in providing flow control updates affects phase 1
> (local buffering) but not phase 2 (emission). A client reprioritization
> would affect phase 2 but not phase 1. In my case, quiche's transport
> priority method does account for the properties Kazuho mentioned: i) it
> returns an error if the stream ID exceeds the current maximum; ii) it
> no-ops if the stream ID refers to a closed stream.
>
> The funnies will happen with trying to accommodate reprioritization
> signals:
>
>    - Exposing the reception of a reprioritization signal
>    (PRIORITY_UPDATE frame) to the application might be useful, or
>    useless if we consider some of Stefan's points.
>       - Reordering can cause the reprioritization to arrive before the
>       initial priority. Exposing an event to the application just made
>       things harder.
>       - Reordering isn't the only concern. In quiche, when an
>       application asks us to read from transport, we internally always
>       read from the control stream and QPACK streams before request
>       streams. So we'd always pull out the PRIORITY_UPDATE first.
>       - Exposing this kind of reprioritization event to the application
>       is mostly useless because the application has no idea of what is
>       being reprioritized. If the priority is used for deciding server
>       work, one of the layers above transport needs to first validate
>       and then remember the details. This means that the library needs
>       to expose a broader API surface than it already does (e.g.
>       exposing Kazuho's properties).
>    - If the transport layer API simply actions the last invoked
>    priority, naively calling it when the signals were received in the
>    "wrong" order means that a reprioritization might be ignored.
>    - If a reprioritization event is simply hairpinned back into the
>    quiche library, there is an argument for not exposing it.
>    - I could simply accommodate things by modifying the transport
>    priority method to take a bool, is_initial. This would prevent an
>    initial priority from being applied after a reprioritization. In
>    conjunction, defining in the spec that the initial priority is
>    *always the header* would remove some of the complexity of buffering
>    data above the transport layer.
>
> All of this is additional consideration and speculation specific to my
> implementation; applicability to others can vary. I can see how things
> would be harder for implementers that attempt to manage more of the
> priority scheme in the HTTP/3 layer than the QUIC one.

I didn't follow those details; I think it would be helpful to summarize
the API you're referring to. This might be naive: while there are
potentially tricky implementation issues, as discussed by Kazuho and
Patrick earlier, and potentially tricky scheduling decisions, as
discussed by Stefan, I'm not seeing how those translate into API
problems.
Generally speaking, at the HTTP application level, a request doesn't
really exist until the HEADERS arrive (example:
<https://golang.org/pkg/net/http/#Handler>; all other HTTP libraries I'm
familiar with work in basically the same way). At that point, the request
has an initial priority, defined either by the HEADERS, or by a
PRIORITY_UPDATE, if one arrived before the HEADERS and there's no Priority
field. Further PRIORITY_UPDATEs can be delivered with whatever event
mechanism is most convenient (callbacks, channels, etc.).

> We also haven't mentioned reprioritization of server push. The client
> cannot control the initial priority of a pushed response and there is an
> open issue about the default priority of a push [2]. In that thread we
> are leaning towards defining no default priority and letting a server
> pick based on information *it* has. However, Mike Bishop's point about
> reprioritizing pushes is interesting [3]. To paraphrase, if you consider
> the RTT of the connection, there are three conditions:
>
> a) the push priority was low: no data was sent by the time a
> reprioritization was received at the server. It is possible to apply the
> reprioritization but, importantly, the push was pointless and we may as
> well have waited for the client to make the request.
> b) the push priority was high, response size "small": all data was sent
> by the time a reprioritization was received at the server. The
> reprioritization was useless.
> c) the push priority was high, response size "large": some data was sent
> at the initial priority, but by the time a reprioritization is received
> at the server, the remaining data can be sent appropriately. However,
> anecdotally we know that pushing large objects is not a good idea.
>
> If we agree to those conditions, it makes for a poor argument to keep
> reprioritization of server push. But maybe there is data that disagrees.

FWIW, I have the opposite interpretation.
We can't ignore case (a) by simply saying that "the push was pointless and
we may as well have waited for the client". That assumes the server should
have known the push would be pointless, but in practice that conclusion
depends on a number of factors that can be difficult to predict (the size
of other responses, congestion control state, network BDP). Sometimes push
is useful, sometimes it's not, and when it's not, we should gracefully
fall back to behavior that is equivalent to not using push at all. From
that perspective, case (a) is WAI. A lack of graceful fallback is a big
reason why push can be such a footgun. Frankly, if pushes cannot be
reprioritized in this way, then IMO push is essentially dead as a feature
(and it's already on rocky ground, as it's so hard to find cases where it
works well in the first place).
Received on Friday, 19 June 2020 18:11:02 UTC