- From: 陈智昌 <willchan@chromium.org>
- Date: Tue, 14 Jan 2014 17:31:34 -0800
- To: Jeff Pinner <jpinner@twitter.com>
- Cc: Roberto Peon <grmocg@gmail.com>, Amos Jeffries <squid3@treenet.co.nz>, HTTP Working Group <ietf-http-wg@w3.org>
On Mon, Jan 13, 2014 at 8:17 PM, Jeff Pinner <jpinner@twitter.com> wrote:
> A few unstructured thoughts about prioritization:
>
> 1) I think the idea of moving from priority levels to a dependency
> structure is a clear improvement. It mirrors both the discussion in
> Seattle and even the initial ideas back in San Francisco when moving to a
> 31-bit priority field.
>
> 2) I think there is merit to the idea of being able to have "groupings"
> of priorities, whether that is through weighting, pointers, the "ordered"
> flag, etc. I'm not positive how best to communicate this and how to make
> sure implementations avoid starvation, but I think it helps proxies,
> which brings me to thought #3.
>
> 3) I think there is work to be done to figure out how to make priorities
> proxy-able. Flow control and push are fairly easy to proxy. With flow
> control you can assign window sizes to proxied connections based on
> whatever fairness mechanism you want. With push you can reject, cache, or
> translate pushed responses. With priority it becomes trickier, which is
> another reason I favor the "weighted dependency tree" approach. Otherwise
> you have to figure out how to merge the levels, and it becomes harder to
> transmit the client's dependency tree through the proxy to the server.

I think that assumes that everything fits into the same dependency tree.
But if you have just individual dependency lists, with the head of each
list carrying the weight, then adjusting weights at the proxy to
appropriately share bandwidth is reasonable.
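To make that concrete, here is a minimal sketch (Python, with invented
class and field names; none of this is in the draft) of per-client
dependency lists whose head weights a proxy can rescale:

```python
# Hypothetical sketch: names and structures are invented for
# illustration and do not come from the prioritization draft.

class DependencyList:
    """A chain of streams served strictly in order; only the head
    of the list carries a weight."""
    def __init__(self, weight, stream_ids):
        self.weight = weight              # relative bandwidth share
        self.streams = list(stream_ids)   # ordered, head first

def merge_at_proxy(lists_by_client, share_by_client):
    """Rescale each client's head weights so that clients split the
    upstream connection per the proxy's own fairness policy, without
    merging everything into one dependency tree."""
    merged = []
    for client, dep_lists in lists_by_client.items():
        total = sum(dl.weight for dl in dep_lists) or 1
        for dl in dep_lists:
            # Normalize within the client, then scale by the share
            # the proxy grants this client.
            dl.weight = dl.weight * share_by_client[client] / total
            merged.append(dl)
    return merged

# Two clients; the proxy grants them 3/4 and 1/4 of the upstream link.
lists_by_client = {
    "client_a": [DependencyList(8, [1, 3]), DependencyList(4, [5])],
    "client_b": [DependencyList(16, [2])],
}
shares = {"client_a": 0.75, "client_b": 0.25}
print([dl.weight for dl in merge_at_proxy(lists_by_client, shares)])
# -> [0.5, 0.25, 0.25]
```

The point of the sketch is that each list stays intact; only the head
weights change as they cross the proxy.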
> 4) I feel uneasy about changing the eventually synchronous nature of the
> stream lifecycle or needing to hold onto state until I receive peer
> acknowledgement. If I am an intermediary and I am trying to translate
> priorities, do I need to wait for synchronization events on both
> connections?
>
> So is the proposed scheme an improvement? Well, I think that depends on
> what the behavior is when I drop nodes as soon as the stream closes. If
> the answer is that it irreparably breaks the dependency tree, then really
> I get more information from the client using flat levels, because I lose
> the anchor for that part of the graph.
>
> So I guess the tl;dr, after you've read my far too long rambling
> thoughts, is:
>
> I think we should be moving to a dependency-based priority scheme similar
> to the one described, but anything that requires synchronizing stream
> state for maintaining required invariants probably isn't viable.

This is helpful input. I'm a bit skeptical we can reach a conclusion
before Zurich, but I hope to discuss the pros/cons of garbage collection
vs state synchronization there.

> But again, thanks so much for actually writing all of this down as a
> draft! :)
>
> On Mon, Jan 13, 2014 at 7:55 PM, William Chan (陈智昌)
> <willchan@chromium.org> wrote:
>>
>> All priorities are advisory...
>>
>> But herein lies the problem. If servers don't respect prioritization
>> information enough, then clients may be put in a situation where the
>> optimal strategy is to be heuristic about prioritization and hedge
>> their bets about the server respecting the advisory prioritization
>> info.
>>
>> Stepping back a bit, what do you think about the prioritization scheme
>> overall? Is it a net improvement, and you're just commenting on the
>> part you'd like to see changed? And do you have a better alternative
>> proposal? We picked a mechanism as a straw man to get discussion
>> going. I'm open to changing it if there's a better alternative. I
>> don't consider the existing prioritization scheme a better
>> alternative.
>>
>> On Mon, Jan 13, 2014 at 7:49 PM, Jeff Pinner <jpinner@twitter.com> wrote:
>> > So what should the receiver do if it receives a dependency for a
>> > garbage-collected node?
>> >
>> > On Mon, Jan 13, 2014 at 4:35 PM, Roberto Peon <grmocg@gmail.com> wrote:
>> >>
>> >> It doesn't rely on it -- it uses it as a better hint that state can
>> >> be discarded.
>> >> One can always discard it, but when one does so for an unacked node,
>> >> one has knowledge that it might still be used in the future. One
>> >> should garbage collect ack'd nodes with preference.
>> >>
>> >> -=R
>> >>
>> >> On Mon, Jan 13, 2014 at 4:24 PM, Jeff Pinner <jpinner@twitter.com>
>> >> wrote:
>> >>>
>> >>> Thanks for the draft Will!
>> >>>
>> >>> My main concern here is that this relies on a synchronous view of
>> >>> stream state. I would prefer a mechanism that becomes consistent as
>> >>> the stream state synchronizes instead of relying on an explicit
>> >>> mechanism (END_STREAM_ACK).
>> >>>
>> >>> On Mon, Jan 13, 2014 at 1:57 PM, William Chan (陈智昌)
>> >>> <willchan@chromium.org> wrote:
>> >>>>
>> >>>> Hey Amos, thanks for taking a look!
>> >>>>
>> >>>> Sorry for the use of both FIN_ACK and END_STREAM_ACK. The original
>> >>>> document used FIN_ACK, but we renamed it to be consistent with the
>> >>>> change of SPDY FIN flags to HTTP/2 END_STREAM flags. I will rename
>> >>>> the FIN_ACK in a future update to this document.
>> >>>>
>> >>>> On Mon, Jan 6, 2014 at 1:55 PM, Amos Jeffries <squid3@treenet.co.nz>
>> >>>> wrote:
>> >>>> > Taking a brief look through this it does look like a better form
>> >>>> > of prioritization than before.
>> >>>> >
>> >>>> > Two things stand out to me:
>> >>>> >
>> >>>> > * PUSH streams can be depended on by non-PUSH streams and vice
>> >>>> > versa.
>> >>>> >
>> >>>> > Possibly leading to a messy situation when an intermediary
>> >>>> > rejects the PUSH'ed resource(s).
>> >>>>
>> >>>> I don't know if there's anything special here for push that
>> >>>> wouldn't be addressed generally by how one handles the RST_STREAM
>> >>>> case.
>> >>>>
>> >>>> > * what happens to the dependencies if a depended-on stream gets
>> >>>> > RST instead of FIN_ACK?
>> >>>> >
>> >>>> > Particularly relevant for the PUSH case above, but it can also
>> >>>> > happen anytime.
>> >>>>
>> >>>> I think RST_STREAM should be treated similarly to END_STREAM_ACK.
>> >>>> Originally, there was a dependency list. Now, a node has been
>> >>>> removed from that linked list. Either the policy is to
>> >>>> automatically re-connect the list, or the broken list becomes a
>> >>>> new list. An explicit policy can be signaled via PRIORITY frames.
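To illustrate the two repair policies just described (a hypothetical
sketch; the draft doesn't specify either as the default, and the stream
IDs are invented):

```python
# Hypothetical sketch of the two repair policies; the draft does not
# mandate either one.

def remove_stream(dep_list, stream_id, reconnect=True):
    """Remove a stream (e.g. one that was RST) from its dependency list.

    reconnect=True  splices the list back together around the hole;
    reconnect=False splits it, so the tail becomes a new list.
    Returns the resulting list(s).
    """
    i = dep_list.index(stream_id)
    if reconnect:
        return [dep_list[:i] + dep_list[i + 1:]]
    return [dep_list[:i], dep_list[i + 1:]]

chain = [3, 5, 7, 9]                             # head first
print(remove_stream(chain, 7, reconnect=True))   # [[3, 5, 9]]
print(remove_stream(chain, 7, reconnect=False))  # [[3, 5], [9]]
```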
>> >>>> > Security considerations would need to mention the possibility:
>> >>>> >
>> >>>> > * that an intermediary drops the FIN_ACK frames (or never sends
>> >>>> > them).
>> >>>> >
>> >>>> > It would seem prudent to simply make
>> >>>> > a) the recipient ignore any priority information if the
>> >>>> > depended-on stream has already completed from its viewpoint, and
>> >>>> > b) the sender not indicate dependencies on streams already
>> >>>> > finished sending.
>> >>>>
>> >>>> I'm not sure if this is a security consideration, but I think it's
>> >>>> correct policy. END_STREAM_ACK is intended to eliminate races here.
>> >>>> Before its receipt, you can always receive references to a
>> >>>> depended-on stream, even if it is in the CLOSED state. So its
>> >>>> location in the dependency list is still important and can't be
>> >>>> ignored. After receipt of an END_STREAM_ACK, any future references
>> >>>> to that stream are PROTOCOL_ERRORs.
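Concretely, the receiver-side rule could look something like this
(hypothetical bookkeeping; only END_STREAM_ACK and PROTOCOL_ERROR come
from the discussion, everything else is invented):

```python
# Hypothetical receiver-side bookkeeping for the rule described above.

class ProtocolError(Exception):
    pass

class PriorityState:
    def __init__(self):
        # Streams we still track, including CLOSED-but-unacked ones:
        # they may legitimately be referenced until the ack arrives.
        self.tracked = set()
        # Streams whose END_STREAM_ACK has been received.
        self.acked = set()

    def on_end_stream_ack(self, stream_id):
        # After the ack there can be no further valid references,
        # so the node can be garbage collected safely.
        self.tracked.discard(stream_id)
        self.acked.add(stream_id)

    def on_dependency(self, stream_id, depends_on):
        if depends_on in self.acked:
            # Any reference after the ack is unambiguously an error.
            raise ProtocolError("PROTOCOL_ERROR: %d was acked" % depends_on)
        # Before the ack, a dependency on a CLOSED stream is still
        # valid, so its place in the dependency list must be kept.
        self.tracked.add(stream_id)
```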
>> >>>> > * that a naive server can consume processing capacity on the
>> >>>> > client simply by forcing rearrangements of the dependency state
>> >>>> > data.
>> >>>> >
>> >>>> > Some measure of protection is made to prevent multiple updates
>> >>>> > in one PRIORITY frame, but consecutive PRIORITY frames with
>> >>>> > repeated updates are not handled.
>> >>>> > Transaction preemption may involve significant memory context
>> >>>> > shifts if the client is an intermediary. Too-frequent
>> >>>> > re-prioritization from the server can trigger this overhead. The
>> >>>> > same can happen from multiplexing, but at least some data
>> >>>> > transfer occurs each frame to offset the inefficiency.
>> >>>>
>> >>>> I agree this is a general DoS consideration. As with many other
>> >>>> control frames, the server has to be cognizant of abuse. I think
>> >>>> http://http2.github.io/http2-spec/#rfc.section.10.5 already handles
>> >>>> this, and even already mentions PRIORITY frames.
>> >>>>
>> >>>> > Amos

Received on Wednesday, 15 January 2014 01:32:02 UTC