- From: Greg Wilkins <gregw@intalio.com>
- Date: Tue, 7 Oct 2014 10:01:55 +1100
- To: HTTP Working Group <ietf-http-wg@w3.org>
- Message-ID: <CAH_y2NF4aLcXZfKKs2F-6NefgnN6jDryg89pcZqLM6iLCHrnrg@mail.gmail.com>
The slight concern that I have with the priority model is that it represents a dynamic weighted priority tree that gives a level of resolution that a server will not be able to achieve well, or that it may be counterproductive for a server to attempt to achieve. Asking for resource A before B does not mean that the server can actually generate A before B. Asking for a 20:80 split between A and B does not mean that those resources can actually be generated at that rate. A server may not know if a resource is available to be served until it has allocated resources to it (e.g. dispatched a thread), and then the availability of that resource will often depend on external resources (e.g. a database).

Given that limitation, it is likely that servers may implement priorities in accordance with 5.3's

   Most importantly, priority can be used to select streams for
   transmitting frames when there is limited capacity for sending.

I'm not sure this is "most importantly" - at least not for the server, as enforcing priorities at the point of selecting which frames to send is not really a good way of allocating server resources. Once generation of a resource on the server has started, significant CPU/memory will have been allocated before the first byte is ready to be sent, and it may be a higher server priority to let that generation run to completion rather than hold it up while it allocates more resources to any dynamically increased client priorities on other streams.

But having priorities is definitely important. When confronted with a connection that has just delivered 80+ requests, the last thing a server should do is simply dispatch them all and let them smear the data from a single user over all the server's cores/caches (Jetty currently does just dispatch them all). To make servers scalable, we will definitely need to restrict requests from a single connection to a lower degree of parallelism than is possible, so we will definitely need priorities to assist with that. But any priority result achieved by dispatch order and/or thread priority is going to be only a very rough approximation of the high-resolution priority tree that 5.3 describes.

So the question is, should the server also then attempt to enforce priorities when selecting the frames to send? I.e. should it double down resources in an attempt to more closely match the client's priority model? Having already committed CPU and memory in an approximation of the client's priorities, should it hold up those resources and commit more to other streams to try to achieve a more precise match to an individual client's priorities? Difficult question, and the answer will depend on the tradeoff a server is willing to make between individual QoS and scaling for many connections.

So I think that something more than a simple High/Low/Medium priority is required, as we do need to get a partial ordering of many resources. But equally, I think it is hard to say if the detail of a dynamic weighted tree is going to be overkill or, worse, counterproductive to try to calculate/enforce. I think a lot of experimentation is needed and that it is worthwhile giving the tree model a go... but it would be good if we also had a fallback to simple absolute values that could be used to communicate a partial ordering. Would it be worthwhile to explicitly say that streams can be given weights against a dependency on stream 0, which would then act as an absolute priority declaration?
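To sketch what I mean (plain Java, not Jetty code, and all names invented for illustration): if every stream declared its weight against a dependency on stream 0, a server could skip the tree shape entirely and just collapse those declarations into a dispatch order by weight, something like:

// A minimal sketch of the "weights against stream 0" fallback: treat a weight
// declared with parent stream 0 as an absolute priority and order dispatch by
// descending weight, ignoring the tree entirely. Not Jetty code.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class AbsolutePriorityFallback
{
    // A stream id plus the weight (1-256) declared against a parent of stream 0.
    record StreamPriority(int streamId, int parentStreamId, int weight) {}

    private final List<StreamPriority> pending = new ArrayList<>();

    // Record a PRIORITY declaration; only dependencies on stream 0 are treated
    // as absolute values in this simplified model.
    public void onPriority(int streamId, int parentStreamId, int weight)
    {
        pending.add(new StreamPriority(streamId, parentStreamId, weight));
    }

    // Produce a dispatch order: highest weight first, stream id (request order)
    // breaking ties.
    public List<Integer> dispatchOrder()
    {
        return pending.stream()
            .filter(p -> p.parentStreamId() == 0)
            .sorted(Comparator.comparingInt(StreamPriority::weight).reversed()
                .thenComparingInt(StreamPriority::streamId))
            .map(StreamPriority::streamId)
            .toList();
    }

    public static void main(String[] args)
    {
        AbsolutePriorityFallback p = new AbsolutePriorityFallback();
        p.onPriority(1, 0, 256); // HTML
        p.onPriority(3, 0, 128); // CSS
        p.onPriority(5, 0, 16);  // images
        System.out.println(p.dispatchOrder()); // [1, 3, 5]
    }
}

That is obviously a much blunter instrument than the tree, but it is cheap to compute and cheap to approximate.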
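And, going back to the point about restricting per-connection parallelism: the kind of dispatch-time approximation I have in mind would look roughly like the sketch below (again invented names, not what Jetty actually does): hold the pending requests of a connection in a priority-ordered queue and cap how many are dispatched concurrently, so priorities influence dispatch order rather than frame selection.

// A rough sketch of approximating priorities at dispatch time rather than at
// frame-selection time: pending requests from a single connection wait in a
// priority-ordered queue and at most a fixed number are dispatched
// concurrently, so 80+ requests do not all smear across the server's
// cores/caches at once. Not Jetty's actual implementation.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.Semaphore;

public class ConnectionDispatcher
{
    // A pending request; higher effectivePriority dispatches first.
    record PendingStream(int streamId, int effectivePriority, Runnable handler)
        implements Comparable<PendingStream>
    {
        @Override
        public int compareTo(PendingStream other)
        {
            return Integer.compare(other.effectivePriority, effectivePriority);
        }
    }

    private final PriorityBlockingQueue<PendingStream> queue = new PriorityBlockingQueue<>();
    private final Semaphore permits;
    private final ExecutorService executor;

    public ConnectionDispatcher(int maxParallelism, ExecutorService executor)
    {
        this.permits = new Semaphore(maxParallelism);
        this.executor = executor;
    }

    // Called when a complete request has been received on a stream.
    public void submit(int streamId, int effectivePriority, Runnable handler)
    {
        queue.offer(new PendingStream(streamId, effectivePriority, handler));
        dispatchNext();
    }

    // Dispatch the highest-priority pending stream, but only while we are
    // under the per-connection parallelism cap.
    private void dispatchNext()
    {
        while (permits.tryAcquire())
        {
            PendingStream next = queue.poll();
            if (next == null)
            {
                permits.release();
                // Re-check in case a request was queued between poll() and release().
                if (queue.isEmpty())
                    return;
                continue;
            }
            executor.execute(() ->
            {
                try
                {
                    next.handler().run();
                }
                finally
                {
                    permits.release();
                    dispatchNext();  // pull the next pending stream, if any
                }
            });
            return;
        }
    }
}

The effectivePriority here could come from the tree, or from the simple absolute weights above - the point is only that the enforcement happens when choosing what to dispatch, not when choosing which frames to send.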
Also, I suspect that it might also be good to have a priority frame that could send many stream priorities in a single frame - so the server could re-evaluate the tree only once, rather than repeatedly as each priority frame arrives. (A rough sketch of what that batching could look like on the server side is below the quoted thread.)

cheers

On 7 October 2014 05:20, Martin Thomson <martin.thomson@gmail.com> wrote:
> On 6 October 2014 10:39, Chad Austin <caustin@gmail.com> wrote:
> > If it's truly the case that PRIORITY is O(depth(parent) - depth(stream)),
> > does that leave an implementation of HTTP 2's priority graph open to
> > denial of service?
>
> That depends on what trade-offs you want to make.
>
> If you really do have a tree that is 10k entries deep, then walking
> what is effectively a linked list is going to cost a non-trivial
> amount of time. But you don't have to maintain a linked list. As you
> note, a small increase in complexity and storage allows for certain
> kinds of reprioritization.
>
> But if you are talking about attack modes, then I'd consider just
> dropping any prioritization that doesn't work out for you.

--
Greg Wilkins <gregw@intalio.com>
http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that scales
http://www.webtide.com advice and support for jetty and cometd.
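To make the batching suggestion above a little more concrete - and to be clear, the frame itself is hypothetical, nothing like it exists in the current draft - the server-side handling could simply apply all the entries and re-evaluate the tree once at the end:

// Purely illustrative: server-side handling of a hypothetical "bulk PRIORITY"
// frame carrying many (stream, parent, weight, exclusive) entries, so the
// dependency tree is re-evaluated once per frame instead of once per stream.
import java.util.List;

public class BulkPriorityHandler
{
    // One entry of the hypothetical frame.
    public record PriorityEntry(int streamId, int parentStreamId, int weight, boolean exclusive) {}

    // Minimal stand-in for whatever dependency structure the server actually keeps.
    public interface PriorityTree
    {
        void reprioritize(int streamId, int parentStreamId, int weight, boolean exclusive);
        void reevaluate();
    }

    private final PriorityTree tree;

    public BulkPriorityHandler(PriorityTree tree)
    {
        this.tree = tree;
    }

    // Apply all entries first, then recompute the derived scheduling state once.
    public void onBulkPriority(List<PriorityEntry> entries)
    {
        for (PriorityEntry e : entries)
            tree.reprioritize(e.streamId(), e.parentStreamId(), e.weight(), e.exclusive());
        tree.reevaluate();  // single re-evaluation instead of one per PRIORITY frame
    }
}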
Received on Monday, 6 October 2014 23:02:24 UTC