Re: Priority implementation complexity (was: Re: Extensible Priorities and Reprioritization)

Thanks for the different perspectives on this. Quoting isn't going to work
so well, so I'll pick out some points from this and the parent thread:

Google have added a flag [1] to Chrome that allows toggling of H2
reprioritization, and some experimental work is happening with it. Thanks!

We've talked a bit about how priorities might affect both scheduling server
work, and selecting the bytes to emit from the server's response send
queue. I agree with Kazuho that we don't want to specify much about the
internals of server applications. However, there are some DoS
considerations (depending on the outcome of the reprioritization
discussion), so looking ahead we might find it useful to capture anything
not already covered in the spec.

The server's role in optimizing the use of available bandwidth is an
interesting perspective to take, especially considering the client's
responsibility for providing flow control updates. In the basic HTTP/3
priority implementation of the quiche library, the application processes
the Priority header and provides that information when sending response
data. Internally, the library uses an implementation-specific API method
to set the priority of the transport stream; this does account for the
properties Kazuho mentioned: i) it returns an error if the stream ID
exceeds the current maximum, and ii) it no-ops if the stream ID refers to
a closed stream. HTTP response payload data is written to each stream
until the QUIC layer tells me it cannot take any more because the send
window is full (call this phase 1, local buffering). In a separate loop,
the QUIC layer selects stream data to transmit based on the priority
(phase 2, emission). The client's expedience in providing flow control
updates affects phase 1 but not phase 2; a client reprioritization affects
phase 2 but not phase 1.
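
To make the two phases concrete, here is a minimal sketch of that split.
The types and method names (Transport, PendingStream, stream_send, emit)
are hypothetical, not quiche's actual API:

    use std::collections::HashMap;

    struct PendingStream {
        urgency: u8,       // lower value = more urgent
        buffered: Vec<u8>, // bytes accepted in phase 1, not yet emitted
    }

    struct Transport {
        send_window: usize, // remaining flow control credit
        streams: HashMap<u64, PendingStream>,
    }

    impl Transport {
        // Phase 1 (local buffering): accept as much response data as flow
        // control allows; the caller retries the rest when the window opens.
        fn stream_send(&mut self, id: u64, data: &[u8], urgency: u8) -> usize {
            let accepted = data.len().min(self.send_window);
            self.send_window -= accepted;
            let entry = self.streams.entry(id).or_insert(PendingStream {
                urgency,
                buffered: Vec::new(),
            });
            entry.urgency = urgency;
            entry.buffered.extend_from_slice(&data[..accepted]);
            accepted
        }

        // Phase 2 (emission): pick buffered data, most urgent stream first.
        // A reprioritization changes `urgency` and therefore this ordering,
        // but it cannot claw back bytes already handed over in phase 1.
        fn emit(&mut self, max_bytes: usize) -> Vec<(u64, Vec<u8>)> {
            let mut ids: Vec<u64> = self.streams.keys().copied().collect();
            ids.sort_by_key(|id| self.streams[id].urgency);

            let mut out = Vec::new();
            let mut budget = max_bytes;
            for id in ids {
                let stream = self.streams.get_mut(&id).unwrap();
                let take = stream.buffered.len().min(budget);
                if take == 0 {
                    continue;
                }
                out.push((id, stream.buffered.drain(..take).collect()));
                budget -= take;
            }
            out
        }
    }

The point of the split is that flow control only gates how much enters the
buffers, while priority only decides the order in which buffered data
leaves them.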

The funnies happen when trying to accommodate reprioritization
signals:

   - Exposing the reception of a reprioritization signal (PRIORITY_UPDATE
   frame) to the application might be useful, or useless if we consider some
   of Stefan's points.
   - Reordering can cause the reprioritization to arrive before the initial
   priority. Exposing an event to the application would just make things
   harder.
      - Reordering isn't the only concern. In quiche, when an application
      asks us to read from the transport, we internally always read from the
      control stream and QPACK streams before request streams, so we'd always
      pull out the PRIORITY_UPDATE first.
      - Exposing this kind of reprioritization event to the application is
      mostly useless because the application has no idea what is being
      reprioritized. If the priority is used for deciding server work, one of
      the layers above transport needs to first validate and then remember
      the details. This means that the library needs to expose a broader API
      surface than it already does (e.g. exposing Kazuho's properties).
      - If the transport layer API simply actions the last invoked priority,
      naively calling it when the signals were received in the "wrong" order
      means that the reprioritization might be ignored.
   - If a reprioritization event is simply hairpinned back into the quiche
   library, there is an argument for not exposing it at all.
   - I could simply accommodate things by modifying the transport priority
   method to take a bool, is_initial (see the sketch after this list). This
   would prevent an initial priority from being applied after a
   reprioritization. In conjunction, defining in the spec that the initial
   priority is *always the header* would remove some of the complexity of
   buffering data above the transport layer.
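
As a rough illustration of that last bullet, here is a sketch of an
is_initial guard; the names (StreamPriorities, set_priority) are
hypothetical and not quiche's actual API:

    use std::collections::{HashMap, HashSet};

    #[derive(Clone, Copy, Debug)]
    struct Priority {
        urgency: u8,
        incremental: bool,
    }

    #[derive(Default)]
    struct StreamPriorities {
        current: HashMap<u64, Priority>,
        // Streams that have already seen a PRIORITY_UPDATE.
        reprioritized: HashSet<u64>,
    }

    impl StreamPriorities {
        fn set_priority(&mut self, stream_id: u64, prio: Priority, is_initial: bool) {
            if is_initial && self.reprioritized.contains(&stream_id) {
                // A PRIORITY_UPDATE was already applied (e.g. it was read
                // from the control stream before the request's Priority
                // header); ignore the later-arriving initial value.
                return;
            }
            if !is_initial {
                self.reprioritized.insert(stream_id);
            }
            self.current.insert(stream_id, prio);
        }
    }

With a guard like this, naively applying signals in arrival order no
longer lets a late initial priority undo an earlier reprioritization.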

All of this is additional consideration and speculation specific to my
implementation; applicability to others may vary. I can see how things
would be harder for implementers that attempt to manage more of the
priority scheme in the HTTP/3 layer than in the QUIC one.

We also haven't mentioned reprioritization of server push. The client
cannot control the initial priority of a pushed response and there is an
open issue about the default priority of a push [2]. In that thread we are
leaning towards defining no default priority and letting a server pick
based on information *it* has. However, Mike Bishop's point about
reprioritizing pushes is interesting [3]. To paraphrase, if you consider
the RTT of the connection, there are three conditions:

a) the push priority was low: no data was sent by the time a
reprioritization was received at the server. It is possible to apply the
reprioritization but, importantly, the push was pointless and we may as
well have waited for the client to make the request.
b) the push priority was high, response size "small": all data was sent by
the time a reprioritization was received at the server. The
reprioritization was useless.
c) the push priority was high, response size "large": some data was sent at
the initial priority, but by the time a reprioritization is received at the
server, the remaining data can still be sent at the new priority. However,
anecdotally we know that pushing large objects is not a good idea.
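
Put roughly in code, and assuming the server tracks how many bytes of the
pushed response it has already handed to the transport (the names below
are hypothetical):

    enum PushRepriEffect {
        AppliedButPushWasPointless, // (a): nothing sent yet
        Useless,                    // (b): everything already sent
        AppliedToRemainder,         // (c): only the unsent tail is affected
    }

    fn classify_push_reprioritization(bytes_sent: usize, total_len: usize) -> PushRepriEffect {
        if bytes_sent == 0 {
            PushRepriEffect::AppliedButPushWasPointless
        } else if bytes_sent >= total_len {
            PushRepriEffect::Useless
        } else {
            PushRepriEffect::AppliedToRemainder
        }
    }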

If we agree on those conditions, there is only a weak argument for keeping
reprioritization of server push. But maybe there is data that disagrees.

Cheers
Lucas


[1] - https://chromium-review.googlesource.com/c/chromium/src/+/2232923
[2] - https://github.com/httpwg/http-extensions/issues/1056
[3] - https://github.com/httpwg/http-extensions/issues/1056#issuecomment-593496441
