Re: Priority implementation complexity (was: Re: Extensible Priorities and Reprioritization)

On Wed, Jun 10, 2020 at 0:20 Patrick Meenan <patmeenan@gmail.com> wrote:

> Maybe I'm missing something but the priority updates don't need to
> coordinate across multiple data streams, just between the one stream that
> is being reprioritized and the control stream.
>
> Would something like this not work?
> - Control stream gets priority update for stream X
> - If stream X is known and the request side is complete/closed then update
> the priority as requested
>

The problem is that when an H3 server receives a reprioritization frame and
fails to find the state of the stream designated by that frame, it has to
decide whether to queue or drop the frame. As you correctly point out, the
size of that queue has to be bounded.

To determine whether to queue or drop, a server needs access to both of the
following pieces of state maintained by the QUIC stack:
* i) the current maximum stream ID permitted to the peer
* ii) the list of stream IDs that have not yet been closed

Without access to (i), a server cannot reject reprioritization frames that
specify an unreasonably large stream ID. Without access to (ii), a server
might start remembering information for streams that have already been
closed.

The question is whether we think it is okay to require all QUIC stacks to
provide access to this information (or to provide an API that allows the
application to query whether a given stream ID meets the two criteria).
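
To make the dependency concrete, here is a minimal sketch of that decision
in Go. The names and the TransportView interface are made up for
illustration; they are not from any actual QUIC stack's API:

    package h3priority

    // Hypothetical view of the QUIC transport state that the H3 layer
    // would need to query; real stacks may or may not expose this.
    type TransportView interface {
        MaxAllowedStreamID() uint64    // (i) largest stream ID the peer may currently open
        IsStreamClosed(id uint64) bool // (ii) whether this stream has already been closed
    }

    type action int

    const (
        queueUpdate action = iota // stream not seen yet but plausible; remember the update
        dropUpdate                // stream closed or ID unreasonable; discard the update
    )

    // decide classifies a PRIORITY_UPDATE whose target stream the H3 layer
    // could not find among its currently open request streams.
    func decide(t TransportView, streamID uint64) action {
        if streamID > t.MaxAllowedStreamID() {
            // Without (i) we could not reject this, and a peer could make us
            // queue updates for arbitrarily large stream IDs.
            return dropUpdate
        }
        if t.IsStreamClosed(streamID) {
            // Without (ii) we would start remembering state for streams
            // that are already gone.
            return dropUpdate
        }
        return queueUpdate
    }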

I would also point out that the size of the queue cannot be restricted any
further than the maximum stream concurrency. This is because, once
reprioritization is considered an indispensable part of Extensible
Priorities, a client might use the reprioritization frame for sending
initial priorities as well, instead of using the header field to indicate
the initial priority.

That is what Chrome does today. If an HTML document contains 100 images,
and Chrome receives the requests for them all at once, it sends 100
PRIORITY_UPDATE frames and then sends the requests for all of those images,
assuming that 100 is the maximum stream concurrency permitted by the server.
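
In other words, if the server advertises a concurrency of N request
streams, it may legitimately receive up to N PRIORITY_UPDATE frames before
any of the corresponding request streams arrive, so the buffer of pending
updates has to be able to hold roughly N entries. A rough sketch of that
bookkeeping, again with made-up names, continuing the example above:

    // Buffer of not-yet-applicable updates, bounded by the stream
    // concurrency advertised to the peer.
    type pendingUpdates struct {
        limit   int               // e.g. 100, the concurrency we advertised
        updates map[uint64][]byte // stream ID -> most recent Priority field value
    }

    func (p *pendingUpdates) remember(streamID uint64, value []byte) {
        if _, known := p.updates[streamID]; !known && len(p.updates) >= p.limit {
            // With the bound equal to the concurrency limit this should not be
            // reachable for a well-behaved client; drop as a safety valve.
            return
        }
        p.updates[streamID] = value // a later update simply replaces an earlier one
    }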

If some servers fail to implement reprioritization correctly, and clients
come to rely heavily on reprioritization, the negative impact on performance
could be far greater than not having reprioritization at all. That is the
concern that some of us have, and the reason why we (I) think defining
reprioritization as an optional feature would be a safer approach.


> - If stream X is either not known or still in the process of receiving
> request details, store the priority update for stream X in a fixed
> queue/map (size can be small but a safe size would be the max number of
> streams supported)
> - If there is already a pending priority update for stream X, discard it
> and replace it with the current priority update
> - If the pending priority update queue is full, drop the oldest and insert
> the new update
> - When a new request stream closes, check the pending priority update
> queue to see if there is an update waiting for the stream. If so, remove it
> from the queue and apply the new priority
>
> There should be no DoS concerns since the queue is fixed and small. The
> performance overhead would be trivial if we assume that out-of-order
> reprioritizations are rare (i.e. the list will almost always be empty).
>
> On Tue, Jun 9, 2020 at 10:48 AM Dmitri Tikhonov <
> dtikhonov@litespeedtech.com> wrote:
>
>> On Tue, Jun 09, 2020 at 03:15:44PM +0100, Lucas Pardue wrote:
>> > I can hypothesize that an implementation with QPACK dynamic support has
>> > already crossed the threshold of complexity that means implementing
>> > reprioritization is not burdensome. I'd like to hear from other
>> > implementers if they agree or disagree with this.
>>
>> I don't think we can judge either way.  If Alice implements QPACK and
>> Bob implements reprioritization, results will vary based on their level
>> of competence.  The degree of burden will also vary for each
>> particular implementation.  Speaking for lsquic, reprioritization
>> had to [1] touch more code and was much more tightly coupled than
>> QPACK; on the other hand, QPACK encoder logic was a lot more code.
>>
>> At a higher level, I don't understand the concern with complexity.
>> If you look up "complexity" in the dictionary, you will see
>>
>>     complexity (n), see QUIC.
>>
>>   - Dmitri.
>>
>> 1. Before it was ripped out of the spec, that is, thanks a lot...
>>
>>

-- 
Kazuho Oku

Received on Wednesday, 10 June 2020 23:54:11 UTC