- From: Greg Wilkins <gregw@intalio.com>
- Date: Fri, 11 Jul 2014 15:39:26 +1000
- To: HTTP Working Group <ietf-http-wg@w3.org>
- Cc: William Chan (陈智昌) <willchan@chromium.org>
- Message-ID: <CAH_y2NHwh=VBuXxkZBk+=QPHqke7wXexJSSJDbJm9scRoFUkhw@mail.gmail.com>
Will, sorry for the volume of emails, and thanks for the reasonable summary.

It is precisely because of concerns about (1) interoperability that I and others recommenced our anti-CONTINUATION campaign. The intent of the "Greg et al" proposal is primarily to get rid of CONTINUATIONs, so increasing the max frame size and introducing some settings to limit it, while good things to do, will not stop the disquiet in the WG.

The issue being that we have two entirely different mechanisms to fragment and stream application data (see the sketch below):

1. DATA*
2. (HEADERS|PUSH_PROMISE) CONTINUATION*

It is entirely unacceptable to have this second mechanism when it is only required for 0.001% of traffic (even less if we accept a 16 bit frame size), has significantly different characteristics, and can disrupt the QoS of the first.

The implementations that do not wish to support large headers have declared they will not implement CONTINUATIONs, but this gives us a (1) interoperability problem, as it is possible to use CONTINUATIONs for small headers. It was proposed several times to allow CONTINUATIONs only for large headers, but those proposals were rejected, so supporting them is necessary to ensure (1). Hence we are in the mess that we are in.

Whilst I fundamentally agree with Roberto's desire to be able to fragment/interleave headers, I fail to see how a WG that sanctioned the creation of h2-13 can suddenly decide that QoS/DoS from large headers is an issue that MUST be addressed, but only when considering the "Greg et al" proposal?

If the WG does decide that headers MUST be fragmentable and interleavable, then the solution is to send them over a segment in DATA frames (or a HEADER frame that works exactly like DATA frames), not to have two separate data streaming mechanisms in the one protocol.

If there really is no consensus on the >=24 bit Greg et al proposal, then I guess I could live very discontently with:

- 16 bit frames
- max frame/header size in settings
- CONTINUATIONs only for headers >64KB
- implementations that have a max header size <= 64KB need not implement CONTINUATIONs

...and yes I do realise that this consigns CONTINUATIONs more or less to the scrap heap... but that's what you get when you design entire complex mechanisms for tiny fractions of the user base.
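To make the two-mechanisms point concrete, here is roughly what a sender does in each case. This is only a sketch in pseudo-Python, not draft text: MAX_FRAME and the helper names are made up, and padding, priority and flow control are all ignored.

```python
# Illustrative only: the *shape* of the two fragmentation mechanisms,
# not the h2-13 wire format. MAX_FRAME and all names are invented.
MAX_FRAME = 16 * 1024

def fragment_data(stream_id, payload):
    """Mechanism 1: DATA* -- flow controlled, and frames from other
    streams may be interleaved between any two of these."""
    chunks = [payload[i:i + MAX_FRAME]
              for i in range(0, len(payload), MAX_FRAME)] or [b""]
    for i, chunk in enumerate(chunks):
        yield ("DATA", {"END_STREAM": i == len(chunks) - 1}, stream_id, chunk)

def fragment_headers(stream_id, header_block):
    """Mechanism 2: HEADERS CONTINUATION* -- not flow controlled, and the
    connection is blocked for everyone else until END_HEADERS is sent."""
    chunks = [header_block[i:i + MAX_FRAME]
              for i in range(0, len(header_block), MAX_FRAME)] or [b""]
    for i, chunk in enumerate(chunks):
        frame_type = "HEADERS" if i == 0 else "CONTINUATION"
        yield (frame_type, {"END_HEADERS": i == len(chunks) - 1}, stream_id, chunk)
```

Two code paths, two sets of flags and two sets of interleaving rules, all doing the same job of chopping one byte sequence into frames.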
regards

On 11 July 2014 11:32, William Chan (陈智昌) <willchan@chromium.org> wrote:

> I've told many folks on this list privately, but basically, the amount of email discussion on all this stuff has been too much for me to keep up with. I've tried to catch up, but I suspect I missed some discussion. I actually never even read jpinner's proposal. So maybe he has good stuff for me to read, but sorry, I didn't have time to read everything. I'm going to state my thoughts, and they may be wrong because I've missed context. Apologies if so. Please point out the relevant email discussing this when rebutting my point and I'll go read it.
>
> I want to separate out certain discussions, even though they may share the same underlying mechanisms. Actually, I'll just state my goals:
>
> (1) Ensure interoperability
> (2) Try to preserve interactiveness of different streams (reduce HOL from a single stream)
> (3) Mitigate DoS / resource usage
>
> For (1), one of the concerns foremost on my mind with these discussions is not breaking *existing* uses of large headers. I know they suck, but I do not consider outright breaking compat with these *existing* HTTP/1.X large headers as acceptable. If you run a large site using a reverse proxy with many many backends, then it can be difficult to switch to HTTP/2 at that reverse proxy if it breaks compatibility for certain services. I think I've seen 64kb floated as a reasonable max header block size. If all our relevant server/proxy folks agree with that, then I have no problem with it.
>
> (2) is multifaceted because this HOL blocking can come in both headers and data. Let's break them down to (a) and (b).
> (a) should be fixed if we have a max headers size. If there's consensus around 64kb, then we're done there. Is there consensus? It wasn't obvious to me.
> (b) AIUI, in Greg et al's proposal, there's a default small (64kb) frame size, and it can be increased at the receiver's discretion. Since the receiver is in control here, that seems fine to me. I'm a bit disappointed by extra configuration and the resulting complexity, but it's clearly tractable and I think it's a reasonable compromise. As a sender myself, I can make sure not to screw up interactivity on the sending side. Having the control as a receiver to force smaller frames (and thereby *mostly* encourage less HOL blocking at the HTTP/2 layer) is enough for me. I do not consider this optimal, but I think it's acceptable.
>
> (3) Greg et al's proposal mitigates a number of DoS issues. That said, Roberto's highlighted to me the importance of being able to fragment large header blocks using multiple frames, in order to reduce the proxy buffering requirements. This is basically what CONTINUATION is used for. And the key distinction between CONTINUATION and jumbo header frames is that CONTINUATION allows for reduced buffering requirements in comparison to jumbo header frames, since you can fragment into multiple frames. Clearly, this incurs extra complexity. So we have a complexity vs buffering requirements tradeoff. IMHO, and that's without being an expert in the area, the complexity strikes me as very tractable. It honestly doesn't seem like that big a deal. I've heard complaints about CONTINUATIONs allowing a DoS vector, but as Greg has pointed out, it only allows as much of a DoS vector as jumbo header frames allow. And if we cap at 64kb anyway, then whatevs. It's really the code complexity that's different. And therein lies the tradeoff, at least AFAICT. I think the complexity increase is minor enough that, if people like Roberto think that the reduction in buffering requirements matters for applications that want to be able to flush after only processing some headers, then whatevs. The complexity increase is minor, so that's fine by me.
>
> I think I've covered everything I've seen discussed in relation to the CONTINUATIONs and jumbo frames and what not. I may have gotten the arguments wrong since I only skimmed everything. If so, please correct me.
>
> In other words, I think I'm mostly fine with Greg et al's proposal if they bring back CONTINUATIONs (so we get fragments and thus reduced buffering requirements in *certain* cases) but keep the header block capped at whatever level is enough to mitigate interoperability issues. I'd like to kill off as many settings as possible, but if we need that compromise, I'm willing to accept it.
>
> Cheers,
> Will
>
> PS: Apologies again for any oversights. I only skimmed the threads, so I'm sure I've gotten some things wrong.
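For reference, the buffering argument in Will's point (3) comes down to the difference between these two proxy loops. Again only a sketch, using the same made-up frame tuples as above, and ignoring HPACK re-encoding entirely.

```python
# Sketch of the buffering tradeoff in point (3). Invented helper names;
# frames are the same (type, flags, stream_id, chunk) tuples as above.

def proxy_jumbo_headers(read_frame, write_frame):
    """Jumbo HEADERS frame: the proxy holds the entire header block
    (up to whatever cap is agreed) before it can forward anything."""
    frame = read_frame()              # one frame carrying the whole block
    write_frame(frame)

def proxy_fragmented_headers(read_frame, write_frame):
    """HEADERS CONTINUATION*: each fragment is forwarded as it arrives,
    so the proxy only ever buffers one fragment at a time."""
    while True:
        ftype, flags, stream_id, chunk = read_frame()
        write_frame((ftype, flags, stream_id, chunk))
        if flags.get("END_HEADERS"):
            break
```

That is the whole of the buffering win, and the question remains whether it justifies carrying a second fragmentation mechanism in the protocol.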
--
Greg Wilkins <gregw@intalio.com>
http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that scales
http://www.webtide.com advice and support for jetty and cometd.
Received on Friday, 11 July 2014 05:39:56 UTC