Re: Priority implementation complexity (was: Re: Extensible Priorities and Reprioritization)

Even without a priority tree, it is likely that the H3 extensible priorities
structure would require not-yet-started responses to be scheduled ahead of
in-flight responses. The urgency value effectively acts as a parent/child
relationship.

It's not as unbounded as H2, but if you churned through a bunch of
reprioritizations with stalled streams you could cause issues for a server
that didn't protect against it.

Limiting reprioritizations to "what stream to pick next" would help, but it
wouldn't solve the long-download problem.
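
To make that concrete, here is a toy sketch (the names and structure are
mine, not from the draft or any server): urgency ordering alone lets a later,
more-urgent request get picked ahead of a response that is already in flight,
and honoring reprioritization only at pick-next time leaves an in-flight long
download untouched.

    class UrgencyScheduler:
        """Toy sketch only -- the names here are mine, not from the draft."""

        def __init__(self):
            self.pending = {}       # stream id -> urgency, not yet started
            self.in_flight = set()  # streams already being processed

        def enqueue(self, stream_id, urgency):
            self.pending[stream_id] = urgency

        def reprioritize(self, stream_id, urgency):
            # Honored only for streams that have not started yet; an in-flight
            # stream (e.g. a long download) keeps its slot, which is exactly
            # the case that "pick next only" does not solve.
            if stream_id in self.pending:
                self.pending[stream_id] = urgency

        def pick_next(self):
            if not self.pending:
                return None
            # lowest urgency value wins (ties broken arbitrarily here)
            stream_id = min(self.pending, key=self.pending.get)
            del self.pending[stream_id]
            self.in_flight.add(stream_id)
            return stream_id

    sched = UrgencyScheduler()
    sched.enqueue("early-request", urgency=3)
    sched.pick_next()                          # "early-request" is now in flight
    sched.enqueue("late-request", urgency=1)   # arrives later but is more urgent
    sched.reprioritize("early-request", 0)     # ignored: already in flight
    print(sched.pick_next())                   # -> late-request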

On Mon, Jun 15, 2020 at 7:44 AM Yoav Weiss <yoav@yoav.ws> wrote:

>
>
> On Mon, Jun 15, 2020 at 1:18 PM Stefan Eissing <
> stefan.eissing@greenbytes.de> wrote:
>
>>
>> Stefan Eissing
>>
>> <green/>bytes GmbH
>> Hafenweg 16
>> 48155 Münster
>> www.greenbytes.de
>>
>> > Am 15.06.2020 um 12:14 schrieb Yoav Weiss <yoav@yoav.ws>:
>> >
>> >
>> >
>> > On Mon, Jun 15, 2020 at 11:03 AM Stefan Eissing <
>> stefan.eissing@greenbytes.de> wrote:
>> >
>> > > Am 15.06.2020 um 10:28 schrieb Yoav Weiss <yoav@yoav.ws>:
>> > >
>> > >
>> > >
>> > > On Mon, Jun 15, 2020 at 9:55 AM Stefan Eissing <
>> stefan.eissing@greenbytes.de> wrote:
>> > > > Am 11.06.2020 um 10:41 schrieb Kazuho Oku <kazuhooku@gmail.com>:
>> > > >
>> > > > That depends on how much clients would rely on reprioritization.
>> Unlike H2 priorities, Extensible Priority does not have inter-stream
>> dependencies. Therefore, losing *some* prioritization signals is less of an
>> issue compared to H2 priorities.
>> > > >
>> > > > Assuming that reprioritization is used mostly for refining the
>> initial priorities of a fraction of all the requests, I think there'd be
>> benefit in defining reprioritization as an optional feature. Though I can
>> see some might argue for not having reprioritization even as an optional
>> feature unless there is proof that it would be useful.
>> > >
>> > >
>> > > > We should decide if reprioritization is good or bad, based on as
>> much data as we can pull, and make sure it's implemented only if we see
>> benefits for it in some cases, and then make sure it's only used in those
>> cases.
>> > >
>> > > When thinking about priority implementations, I recommend thinking
>> about an H3 reverse proxy in front of a legacy H1 server. Assume limited
>> memory, disk space and backend connections.
>> > >
>> > > (Re-)prioritization in H2 works well for flow control among the
>> streams that have response data to send. Priorities can also play a part in
>> server scheduling, but that is trickier. By "scheduling" I mean that the
>> server has to pick, among the opened streams, the one for which it wants to
>> compute a response. This is often impossible to re-prioritize afterwards
>> (e.g. suicidal for a server implementation).
>> > >
>> > > Can you expand on why it is "suicidal"?
>> >
>> > It is tricky to obey re-prioritizations to the letter while managing
>> memory and backend connections and protecting the infrastructure against
>> DoS attacks. The reality is that resources are limited and a server is
>> expected to protect them. That is a (pun intended) top priority.
>> >
>> > Another priority that tops the per-stream priorities is fairness between
>> connections. In Apache httpd, the resources to process h2 streams are first
>> and foremost shared evenly between connections.
>> >
>> > That makes sense. Would re-prioritization of specific streams somehow
>> require changing that?
>> >
>> > The share a connection gets is then allocated to streams based on
>> current h2 priority settings. Any change after that will "only" affect the
>> downstream DATA allocation.
>> >
>> > I *think* this makes sense as well, assuming that by "downstream" you
>> mean "future". Is that what you meant? Or am I missing something?
>> >
>> > Also, the number of "active" streams on a connection is dynamic. It
>> will start relatively small and grow if the connection is well behaved, or
>> shrink if it is not. That's one of the reasons Apache was only partially
>> vulnerable to a single issue on the Netflix h2 CVE list last year (the
>> other reason being nghttp2).
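>> >
>> > Very roughly, that adaptive limit looks something like this (the numbers
>> and names are made up, not the actual httpd logic):
>>
>>     class StreamWindow:
>>         """Toy sketch of a per-connection cap on concurrently active streams."""
>>
>>         def __init__(self, start=6, floor=1, ceiling=100):
>>             self.limit, self.floor, self.ceiling = start, floor, ceiling
>>
>>         def on_good_behavior(self):   # e.g. streams complete normally
>>             self.limit = min(self.ceiling, self.limit * 2)
>>
>>         def on_misbehavior(self):     # e.g. resets, churn, stalled streams
>>             self.limit = max(self.floor, self.limit // 2)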
>> >
>> > tl;dr
>> >
>> > By "suicidal" I mean a server failing the task of process thousands of
>> connections in a consistent and fair manner.
>> >
>> > Apologies if I'm being daft, but I still don't understand how (internal
>> to a connection) stream reprioritization impacts cross-connection fairness.
>>
>> *fails to imagine Yoav as being daft*
>>
> :)
>
> Thanks for outlining the server-side processing!
>
>
>> Consider a server with active connections and workers. For simplicity,
>> assume that each ongoing request allocates a worker.
>> - all workers are busy
>> - a re-prioritization arrives and makes a stream A, which is already being
>> processed, depend on a stream B that has not been assigned a worker yet.
>>
>
> OK, I now understand that this can be concerning.
> IIUC, this part is solved with Extensible Priorities (because there's no
> dependency tree).
>
> Lucas, Kazuho - can you confirm?
>
>
>> - ideally, the server would freeze the processing of A and assign the
>> resources to B.
>> - however, re-allocating the resources is often not possible (imagine a
>> CGI process running, or a backend HTTP/1.1 or uWSGI connection)
>> - the server can only suspend the worker or continue processing, ignoring
>> the dependency.
>> - a suspended worker is very undesirable and a possible victim of a
>> slow-loris attack
>> - To make this suspension less severe, the server would need to make
>> processing of stream B very important, to unblock it quickly again. This is
>> where the unfairness comes in.
>>
>> The safe option therefore is to continue processing stream A and ignore
>> the dependency on B. Thus, priorities are only relevant:
>> 1. when the next stream to process on a connection is selected
>> 2. when the size/number of DATA frames to send is allocated on a
>> connection among all the streams that want to send (sketched below)
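>>
>> A rough sketch of those two touchpoints (illustrative names only, not the
>> httpd code):
>>
>>     from collections import namedtuple
>>
>>     Stream = namedtuple("Stream", "id priority")  # higher value = more important here
>>
>>     def pick_next_stream(waiting):
>>         # (1) priorities are consulted only when a worker frees up; streams
>>         # that are already being processed are never preempted
>>         return max(waiting, key=lambda s: s.priority, default=None)
>>
>>     def allocate_data_budget(connection_share, sendable):
>>         # (2) the connection's fair share of bytes (fairness between
>>         # connections comes first) is split among the streams that
>>         # currently have response data to send
>>         total = sum(s.priority for s in sendable) or 1
>>         return {s.id: connection_share * s.priority // total for s in sendable}
>>
>>     streams = [Stream(1, 8), Stream(3, 32)]
>>     print(pick_next_stream(streams))             # Stream(id=3, priority=32)
>>     print(allocate_data_budget(16384, streams))  # {1: 3276, 3: 13107}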
>>
>> (Reality is often not quite as bad as I described: when static file/cache
>> resources are served, for example, a worker often just does the lookup,
>> producing a file handle very quickly. A connection easily juggles a number
>> of file handles to stream out according to priorities, and stalling one
>> file on another comes at basically no risk or cost.)
>>
>> Now, this is for H2 priorities. I don't know enough about QUIC priorities
>> to have an opinion on the proposals. I just wanted to point out that
>> servers see the world a little differently than clients do. ;)
>>
>
> I checked and it seems like Chromium does indeed change the parent
> dependency as part of reprioritization. If the scenario you outlined is a
> problem in practice, we should discuss ways to avoid doing that with H2
> priorities.
>
>
>>
>> Cheers, Stefan
>>
>>
>> > >
>> > >
>> > > If we were to do H2 a second time, my idea would be to signal
>> priorities in the HTTP request in a connection header and use this at the
>> H2 frame layer to allocate DATA space on the downlink. Leave out changing
>> priorities on a request that has already started. Let the client use its
>> window sizes if it feels the need.
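>>
>> > > Something along those lines, purely as illustration (the header name
>> and the weighting are made up):
>>
>>     def weight_from_request(headers):
>>         # read a priority hint once, at request time; no reprioritization later
>>         try:
>>             urgency = int(headers.get("priority-hint", "3"))
>>         except ValueError:
>>             urgency = 3
>>         urgency = min(7, max(0, urgency))
>>         return 8 - urgency   # lower urgency value -> bigger share of DATA space
>>
>>     # the weights then drive how DATA space is split on the downlink
>>     print(weight_from_request({"priority-hint": "1"}))  # 7
>>     print(weight_from_request({}))                      # 5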
>> > >
>> > > Cheers, Stefan (lurking)
>> >
>>
>>

Received on Monday, 15 June 2020 12:10:42 UTC