extensible prioritization scheme review

Reading https://httpwg.org/http-extensions/less-h2/draft-ietf-httpbis-priority.html again, I have some comments on the draft. Thank you for considering my thoughts.


Small correction:
7.1:
"When the PRIORITY_UPDATE frame applies to a request stream, clients SHOULD provide a Prioritized Stream ID that refers to a stream in the "open", "half-closed (local)", or "idle" state."

I believe this is "half-closed (remote)" instead?



I find the question below not really answered.

14. "Why use an End-to-End Header Field?"

"The way a client processes a response is a property associated to that client generating that request. Not that of an intermediary. Therefore, it is an end-to-end property."
...
"Having the Priority header field defined as end-to-end is important for caching intermediaries."


I fail to see that. Given the complexities of HTTP Caching and "Vary" handling, this needs more detail.

Clients process responses differently. How can the processing properties of Google Chrome at a certain point in time, versus those of Firefox at another point in time, apply to a server resource? It also depends on the context in which the resource is loaded. If an image is embedded in one HTML page, its Priority: properties will differ from those it has when embedded in another HTML page.
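To illustrate the caching interaction: suppose a cache were told (via a hypothetical "Vary: Priority") to key on the Priority request header. The same image requested from two different pages would then occupy two cache entries. This is a sketch with made-up header values, not something the draft proposes:

```python
def cache_key(url, request_headers, vary):
    """Build a cache key from the URL plus each header named in Vary."""
    parts = [url]
    for name in vary:
        parts.append(f"{name}={request_headers.get(name, '')}")
    return "|".join(parts)

# Same resource, embedded in two pages with different priorities:
k1 = cache_key("/logo.png", {"Priority": "u=1"}, vary=["Priority"])
k2 = cache_key("/logo.png", {"Priority": "u=5"}, vary=["Priority"])
# k1 != k2: one resource fragments into two cache entries
```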

That is why I have difficulty applying `Priority:` to server response metadata. If an intermediary (CDN) cache wants to persist response priorities, it is certainly free to do so. But that seems outside HTTP.

Would it not be better if servers could send PRIORITY_UPDATE frames to the client, if indeed a server's changes to Priority should be made known to the client side of the connection?
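The frame the draft already defines for HTTP/2 (type 0x10: a reserved bit plus a 31-bit Prioritized Stream ID, followed by the ASCII Priority Field Value, sent on stream 0) looks lightweight enough for this. A hypothetical encoder, assuming that H2 wire layout:

```python
import struct

def priority_update_frame(prioritized_stream_id, field_value, stream_id=0):
    """Encode an H2 PRIORITY_UPDATE frame (type 0x10) per the draft's
    H2 format: 24-bit length, type, flags, 31-bit carrying stream ID,
    then R + 31-bit Prioritized Stream ID and the ASCII field value.
    Illustrative sketch only; the draft defines this as client-to-server."""
    payload = (struct.pack("!I", prioritized_stream_id & 0x7FFFFFFF)
               + field_value.encode("ascii"))
    header = (struct.pack("!I", len(payload))[1:]   # 24-bit length
              + bytes([0x10, 0])                    # type, flags
              + struct.pack("!I", stream_id & 0x7FFFFFFF))
    return header + payload
```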



My main issue is with the requirement that non-incremental responses SHOULD pre-empt all others. As in:

10.
"Therefore, non-incremental responses of the same urgency SHOULD be served in their entirety, one-by-one, based on the stream ID, which corresponds to the order in which clients make requests."
...
"It is RECOMMENDED that servers avoid such starvation where possible."


Slowloris strikes again. If a server sticks to the SHOULD of serving responses by priority as described (or as I read it), it seems trivial for a malicious client to hog server resources:
- start several streams with small window sizes
- send a "Priority: u=1" request for a WebSocket resource located at the server. That one will never terminate, blocking all responses that have already been scheduled.
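The scheduling rule from section 10 can be sketched as follows; stream contents and the use of None to model a never-ending response are my illustrative assumptions:

```python
def serve_non_incremental(streams):
    """streams: list of (stream_id, chunks). chunks is a list of byte
    counts making up a response, or None to model a response that never
    completes (e.g. an open WebSocket). Implements the quoted rule:
    serve whole responses one-by-one, in stream ID order."""
    sent, starved = [], []
    for stream_id, chunks in sorted(streams):
        if chunks is None:
            # Head-of-line blocking: every later stream starves.
            starved = [sid for sid, _ in sorted(streams) if sid > stream_id]
            break
        sent.extend((stream_id, size) for size in chunks)
    return sent, starved

sent, starved = serve_non_incremental([
    (1, [100, 100]),  # ordinary response, served in full
    (3, None),        # "Priority: u=1" WebSocket: never terminates
    (5, [50]),        # requested later, starved behind stream 3
])
# sent == [(1, 100), (1, 100)], starved == [5]
```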

I can understand clients saying "I need this CSS resource NOW! Stop wasting my bandwidth on other things!". But this spec's solution of sending only the response for the CSS resource assumes that this response can make full use of the available bandwidth. That will not always be the case: a cached CSS resource might need revalidation, and while this is happening, nothing else is sent.

Stating "It is RECOMMENDED that servers avoid such starvation where possible." means that, in real deployments, server implementations *need* to ignore SHOULDs in this spec. That does not seem like a good approach to protocol design.

If HTTP reboots its priority scheme, we should be sure to address such scenarios, or the outcome will once again be unsatisfactory for clients that need to shave milliseconds off page paints.



Replacing H2's priority dependency tree with something simpler and defined for H2+H3 is very welcome. Thank you for the work put into this.

Kind Regards,
Stefan

Received on Thursday, 18 November 2021 11:21:07 UTC