Re: Priority implementation complexity (was: Re: Extensible Priorities and Reprioritization)

On Fri, Jun 19, 2020 at 7:10 PM Tom Bergan <tombergan@chromium.org> wrote:

> I didn't follow those details. I think it would be helpful to summarize
> the API you're referring to.
>
> This might be naive: While there are potentially tricky implementation
> issues, as discussed by Kazuho and Patrick earlier, and potentially tricky
> scheduling decisions, as discussed by Stefan, I'm not seeing how those
> translate into API problems. Generally speaking, at the HTTP application
> level, a request doesn't really exist until the HEADERS arrive (example
> <https://golang.org/pkg/net/http/#Handler>; all other HTTP libraries I'm
> familiar with work in basically the same way). At that point, the request
> has an initial priority, defined either by the HEADERS, or by the
> PRIORITY_UPDATE, if one arrived before HEADERS and there's no Priority
> field. Further PRIORITY_UPDATEs can be delivered with whatever event
> mechanism is most convenient (callbacks, channels, etc.).
>

The quiche HTTP/3 library layer provides a poll() method that an
application calls. This queries readable transport streams and tries to
read requests (a complete headers list). Today, communicating the initial
priority is easy: I just pass through the Priority header that I received.
The application chooses the priority for sending responses by providing a
desired priority to the send_response() method; there is no coupling
between the request and response priority. send_response() more or less
passes everything through to the QUIC library, which manages the
packetization of STREAM frames once data is queued up. Minimal state is
required in the HTTP/3 library layer: once a response is started, the
application just tries to fill the transport stream send buffer (via the
H3 layer) as quickly as the client drains it. When the application is
ready to complete the response, it sends the last piece of data with a
FIN. The application can then forget about the stream. At this stage only
the transport maintains stream state, because the response is not complete
until the client has read the remaining data. If we deem reprioritization
useful, then it needs to be supported through the full lifetime of the
stream.
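
For concreteness, the shape of that flow as I think of it (a sketch
only; the type and method names here are stand-ins, not quiche's
literal API):

    // Sketch of the poll()/respond flow described above. H3Conn,
    // QuicConn, Event, Error and every method name are hypothetical
    // stand-ins, not quiche's actual API.
    fn serve_ready_requests(h3: &mut H3Conn, quic: &mut QuicConn)
        -> Result<(), Error>
    {
        loop {
            match h3.poll(quic) {
                Ok((stream_id, Event::Headers { list })) => {
                    // Initial priority: whatever the Priority header
                    // said, or a default if the request carried none.
                    let resp_priority = choose_response_priority(&list);
                    // No coupling between request and response
                    // priority; the application simply picks one.
                    h3.send_response_with_priority(
                        quic, stream_id, &build_headers(), &resp_priority)?;
                    // After this, keep the stream's send buffer full
                    // and finish the body with fin = true; the
                    // application then forgets the stream, and only the
                    // transport holds state until the client drains it.
                }
                Ok((_stream_id, _event)) => { /* body data etc. */ }
                Err(Error::Done) => return Ok(()), // nothing readable
                Err(e) => return Err(e),
            }
        }
    }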

Adding PRIORITY_UPDATE requires some more work in my HTTP/3 library layer.
One question that comes to mind is whether the application cares about the
full sequence of PRIORITY_UPDATEs, or whether it is fine to skip/collapse
them. Before the request has been poll()ed out, it seems sensible to
buffer PRIORITY_UPDATEs and then only present the most recent one. To call
this the initial priority is a slight fib; "most recent priority at the
time you discovered the request existed" is more apt, but this is
splitting hairs. The one concern I would have is where the Priority header
and the most-recent value disagree. By passing both priorities out to the
application, I'm making it responsible for picking.
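
Concretely, what I have in mind for the buffering is something like
this (a sketch; all names are made up):

    // Sketch: collapse PRIORITY_UPDATEs that arrive before the request
    // has been poll()ed out, keeping only the most recent per stream.
    use std::collections::HashMap;

    struct PendingUpdates {
        // stream ID -> most recent Priority field value seen so far
        latest: HashMap<u64, String>,
    }

    impl PendingUpdates {
        fn on_priority_update(&mut self, stream_id: u64, value: String) {
            // Overwrite any earlier update; intermediate values are
            // deliberately skipped.
            self.latest.insert(stream_id, value);
        }

        // When the request surfaces via poll(), hand the application
        // the collapsed value (if any) alongside the Priority header
        // and let it pick between the two.
        fn take(&mut self, stream_id: u64) -> Option<String> {
            self.latest.remove(&stream_id)
        }
    }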

Withholding a reprioritization event until after the request has been
poll()ed helps a bit. But I think there is no clean way to deal with
reprioritization events that arrive after the application is done with the
stream; if the application is nominally done with processing the request,
all it can do is tell the transport to behave differently. What's the
point in that? Attempting to explain the oddities caused by QUIC's
behavior is part of the API problem IMO.
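
For the sake of argument, that "tell the transport" step looks roughly
like this (a sketch; Transport and the method names stand in for
whatever knobs a QUIC implementation actually exposes):

    // Sketch: a PRIORITY_UPDATE arriving after the application has
    // finished with the stream. The H3 layer holds no request state at
    // this point, so the most it can do is poke the transport.
    fn on_late_priority_update(
        quic: &mut Transport, stream_id: u64, urgency: u8, incremental: bool,
    ) {
        if quic.stream_has_unsent_data(stream_id) {
            // Only the transport still knows this stream; adjust how
            // its remaining buffered bytes are scheduled.
            quic.set_stream_priority(stream_id, urgency, incremental);
        }
        // Otherwise nothing is left to schedule and the update is moot.
    }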


> We also haven't mentioned reprioritization of server push. The client
>> cannot control the initial priority of a pushed response and there is an
>> open issue about the default priority of a push [2]. In that thread we are
>> leaning towards defining no default priority and letting a server pick
>> based on information *it* has. However, Mike Bishop's point about
>> reprioritizing pushes is interesting [3]. To paraphrase, if you consider
>> the RTT of the connection, there are three conditions:
>>
>> a) the push priority was low: so no data was sent by the time a
>> reprioritization was received at the server. It is possible to apply the
>> reprioritization but importantly, the push was pointless and we may as well
>> have waited for the client to make the request.
>> b) the push priority was high, response size "small": so all data was
>> sent by the time a reprioritization was received at the server. The
>> reprioritization was useless.
>> c) the push priority was high, response size "large": some data sent at
>> initial priority but at the time a reprioritization is received at the
>> server, the remaining data can be sent appropriately. However, anecdotally
>> we know that pushing large objects is not a good idea.
>>
>> If we agree to those conditions, it makes for a poor argument to keep
>> reprioritization of server push. But maybe there is data that disagrees.
>>
>
> FWIW, I have the opposite interpretation. We can't ignore case (a) by
> simply saying that "the push was pointless and we may as well have waited
> for the client". That assumes the server should have known the push would
> be pointless, but in practice that conclusion depends on a number of
> factors that can be difficult to predict (size of other responses,
> congestion control state, network BDP). Sometimes push is useful, sometimes
> it's not, and when it's not, we should gracefully fall back to behavior that
> is equivalent to not using push at all. From that perspective, case (a) is
> WAI.
>
> This lack of a graceful fallback is a big reason why push can be such a
> footgun. Frankly, if pushes cannot be reprioritized in this way, then IMO
> push is essentially dead as a feature (and it's already on rocky ground, as
> it's so hard to find cases where it works well in the first place).
>

That's a fair opinion too. Have you any thoughts about server push
reprioritization being a motivating factor for maintaining the feature?

Unfortunately I don't have a server push API that I can use to speculate
about reprioritization, although I suspect that I'd have similar problems
determining when a pushed response was "done", with the added complication
that something would need to maintain state to map push IDs to stream IDs.
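
Very roughly, I imagine that mapping looking like this (a sketch; all
names hypothetical):

    // Sketch: PRIORITY_UPDATE frames for pushes identify them by push
    // ID, so the server must keep a push ID -> stream ID map for the
    // lifetime of each push.
    use std::collections::HashMap;

    struct PushState {
        push_to_stream: HashMap<u64, u64>, // push ID -> stream ID
    }

    impl PushState {
        fn on_push(&mut self, push_id: u64, stream_id: u64) {
            self.push_to_stream.insert(push_id, stream_id);
        }

        fn stream_for_update(&self, push_id: u64) -> Option<u64> {
            self.push_to_stream.get(&push_id).copied()
        }

        // And something has to decide when an entry can be dropped,
        // which is exactly the "when is it done?" problem above.
    }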

Cheers
Lucas
