Re: Priority implementation complexity (was: Re: Extensible Priorities and Reprioritization)

On Mon, Jun 15, 2020 at 11:03 AM Stefan Eissing <
stefan.eissing@greenbytes.de> wrote:

>
> > Am 15.06.2020 um 10:28 schrieb Yoav Weiss <yoav@yoav.ws>:
> >
> >
> >
> > On Mon, Jun 15, 2020 at 9:55 AM Stefan Eissing <
> stefan.eissing@greenbytes.de> wrote:
> > > Am 11.06.2020 um 10:41 schrieb Kazuho Oku <kazuhooku@gmail.com>:
> > >
> > > That depends on how much clients would rely on reprioritization.
> Unlike H2 priorities, Extensible Priority does not have inter-stream
> dependencies. Therefore, losing *some* prioritization signals is less of an
> issue compared to H2 priorities.
> > >
> > > Assuming that reprioritization is used mostly for refining the initial
> priorities of a fraction of all the requests, I think there'd be benefit in
> defining reprioritization as an optional feature. Though I can see some
> might argue for not having reprioritization even as an optional feature
> unless there is proof that it would be useful.
> >
> >
> > > We should decide whether reprioritization is good or bad, based on as
> much data as we can pull, make sure it's implemented only if we see
> benefits for it in some cases, and then make sure it's only used in those
> cases.
> >
> > When thinking about priority implementations, I recommend thinking about
> a H3 reverse proxy in front of a legacy H1 server. Assume limited memory,
> disk space and backend connections.
> >
> > (Re-)prioritization in H2 works well for flow control, among the streams
> that have response data to send. Priorities can play a part in server
> scheduling, but
> > it's more tricky. By "scheduling" I mean that the server has to pick the
> one among the opened streams for which it wants to compute a response. That
> choice is often impossible to re-prioritize afterwards (e.g. suicidal for a
> server implementation).
> >
> > Can you expand on why it is "suicidal"?
>
> It is tricky to obey re-prioritizations to the letter while managing
> memory and backend connections and protecting the infrastructure against
> DoS attacks. The reality is that there are limited resources and a server
> is expected to protect those. It's a (pun intended) top priority.
>
> Another priority, ranking above any individual stream, is the concept of
> fairness between connections. In Apache httpd, the resources to process h2
> streams are first and foremost shared evenly between connections.


That makes sense. Would re-prioritization of specific streams somehow
require changing that?
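If I'm reading the two-level model right, it could be sketched roughly like
this (illustrative Python with invented names and numbers, not Apache httpd
internals): split the budget evenly across connections first, then across each
connection's streams in proportion to their priority weight.

```python
def allocate(total_budget, connections):
    """Two-level allocation sketch: even split between connections,
    then weight-proportional split between a connection's streams.
    Purely illustrative; not how httpd actually implements this.

    connections: {conn_id: {stream_id: weight}}
    """
    per_conn = total_budget // max(len(connections), 1)
    plan = {}
    for conn_id, streams in connections.items():
        total_weight = sum(streams.values()) or 1
        # Each stream gets a share of its connection's budget in
        # proportion to its weight (integer division for simplicity).
        plan[conn_id] = {
            sid: per_conn * w // total_weight for sid, w in streams.items()
        }
    return plan
```

Under this model, a reprioritization only shifts the weights within one
connection's share; the even split between connections is untouched.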


> The share a connection gets is then allocated to streams based on current
> h2 priority settings. Any change after that will "only" affect the
> downstream DATA allocation.


I *think* this makes sense as well, assuming that by "downstream" you mean
"future". Is that what you meant? Or am I missing something?

> Also, the number of "active" streams on a connection is dynamic. It will
> start relatively small and grow if the connection is behaving well, shrink
> if it is not. That is one of the reasons that Apache was only partially
> vulnerable to a single issue on the Netflix h2 CVE list last year (the
> other being nghttp2).
>
> tl;dr
>
> By "suicidal" I mean a server failing the task of process thousands of
> connections in a consistent and fair manner.
>

Apologies if I'm being daft, but I still don't understand how (internal to
a connection) stream reprioritization impacts cross-connection fairness.
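On the dynamic "active" stream count: if I understand it right, that behaves
roughly like an AIMD (additive-increase, multiplicative-decrease) limit. A
sketch, with made-up constants, not Apache's actual heuristics:

```python
class ActiveStreamLimit:
    """Grow the number of concurrently processed streams while a
    connection behaves well; cut it back sharply when it misbehaves.
    An AIMD-style sketch, not Apache httpd's actual algorithm."""

    def __init__(self, initial=6, floor=1, ceiling=100):
        self.limit = initial
        self.floor = floor
        self.ceiling = ceiling

    def on_good_behavior(self):
        # Additive increase: one more concurrent stream allowed.
        self.limit = min(self.limit + 1, self.ceiling)

    def on_misbehavior(self):
        # Multiplicative decrease: halve the allowance.
        self.limit = max(self.limit // 2, self.floor)
```

A scheme like this bounds how much work any single connection can demand,
independent of whatever priorities its streams claim.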


> >
> >
> > If we were to do H2 a second time, my idea would be to signal priorities
> in the HTTP request in a connection header and use this in the H2 frame
> layer to allocate DATA space on the downlink. Leave out changing priorities
> on a request already started. Let the client use its window sizes if it
> feels the need.
> >
> > Cheers, Stefan (lurking)
>
>
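For what it's worth, the "priority in a request header" idea is close to what
the Extensible Priorities draft does with a `priority` header such as
`u=3, i`. A minimal, hand-rolled parse of that shape could look like the
following (a real implementation should use a proper Structured Fields
parser rather than this sketch):

```python
def parse_priority(value, default_urgency=3):
    """Parse an Extensible-Priorities-style header value such as
    "u=2, i" into (urgency, incremental). Minimal sketch only;
    real implementations should use an RFC 8941 Structured Fields
    parser and handle the full grammar."""
    urgency, incremental = default_urgency, False
    for item in value.split(","):
        item = item.strip()
        if item.startswith("u="):
            try:
                u = int(item[2:])
            except ValueError:
                continue
            # Urgency is defined on the range 0 (highest) to 7 (lowest).
            if 0 <= u <= 7:
                urgency = u
        elif item in ("i", "i=?1"):
            incremental = True
    return urgency, incremental
```

Malformed or out-of-range parameters fall back to the defaults, which matches
the general spirit of the scheme: a missing or broken signal just means
default priority.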

Received on Monday, 15 June 2020 10:15:27 UTC