RE: willchan's thoughts on continuations, jumbo frames, etc after *only skimming* the threads

Will, thanks for the summary.  This captures a lot of what we have been talking about internally as well.  We actually find that supporting CONTINUATION will likely be simpler than some of the alternatives.  An example of this would be a large response from an application.  Our current platforms allow for very large headers, and creating a limit here may create incompatibility for some existing applications.  In this case, the app does not know if the headers violate the max frame size until after we run them through the compressor.  At that point, we have changed the compressor state and will either be forced to create an "unwind" to roll back the state or a "preview" where the compressor estimates the size before we commit to full compression.  If we run it through the compressor, we cannot even send a status code to tell the client to downgrade.  Again, based on the complexity of the code we would have to put into place to mitigate potential issues like these, CONTINUATION is actually simpler.  Also, in Greg's proposal, the server is not in control of the max header frame size, and I suspect that most browsers will set this to whatever default we choose here.
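The unwind/preview problem above comes from header compression being stateful: encoding a header block mutates a table shared with the peer, so a size check after compression has to roll that mutation back. A minimal sketch (a toy stateful compressor, not real HPACK; `ToyCompressor` and its encoding are illustrative assumptions) of the "unwind" strategy:

```python
# Hypothetical sketch, NOT real HPACK: a toy stateful header
# compressor showing why a post-compression size check needs an
# "unwind". encode() mutates the dynamic table, and a decoder on
# the other side depends on that mutation having happened.

class ToyCompressor:
    def __init__(self):
        self.dynamic_table = []  # shared state, mutated by encode()

    def encode(self, headers):
        out = bytearray()
        for name, value in headers:
            entry = (name, value)
            if entry in self.dynamic_table:
                # Indexed representation: a single index byte.
                out.append(self.dynamic_table.index(entry) + 1)
            else:
                # Literal representation; the entry is added to the
                # table -- the state change an "unwind" must undo.
                self.dynamic_table.append(entry)
                out += name.encode() + b": " + value.encode() + b"\r\n"
        return bytes(out)

    def encode_with_limit(self, headers, max_size):
        # "Unwind": snapshot the table, compress, roll back on overflow.
        snapshot = list(self.dynamic_table)
        encoded = self.encode(headers)
        if len(encoded) > max_size:
            self.dynamic_table = snapshot  # roll back the state
            return None
        return encoded

comp = ToyCompressor()
assert comp.encode_with_limit([("x-large", "v" * 100)], max_size=16) is None
assert comp.dynamic_table == []  # state was unwound
assert comp.encode_with_limit([("accept", "text/html")], max_size=64) is not None
assert comp.dynamic_table == [("accept", "text/html")]
```

The "preview" alternative would estimate the encoded size without touching the table at all, at the cost of a second pass over the headers.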

For data frames, we are fine with allowing large data frames, although I am not sure we will ever use them.  Until there is much more data on large frames, we will likely just keep the default.  We will wait until we have customer demand to have a single connection where we are multiplexing large uploads / downloads with small high priority traffic.  IE will likely not have foreknowledge of the type of traffic and will set data frame size to the default.  We will not expose complicated knobs in our app platforms until there is real customer demand for it.

Our feedback would be: more settings seem like added complication, and CONTINUATION is simpler than coding around the failure cases for existing applications.

Thanks!!

-Rob


From: willchan@google.com [mailto:willchan@google.com] On Behalf Of William Chan
Sent: Thursday, July 10, 2014 6:32 PM
To: HTTP Working Group
Subject: willchan's thoughts on continuations, jumbo frames, etc after *only skimming* the threads

I've told many folks on this list privately, but basically, the amount of email discussion on all this stuff has been too much for me to keep up with. I've tried to catch up, but I suspect I missed some discussion. I actually never even read jpinner's proposal. So maybe he has good stuff for me to read, but sorry, I didn't have time to read everything. I'm going to state my thoughts, and they may be wrong because I've missed context. Apologies if so. Please point out the relevant email discussing this when rebutting my point and I'll go read it.

I want to separate out certain discussions, even though they may share the same underlying mechanisms. Actually, I'll just state my goals:

(1) Ensure interoperability
(2) Try to preserve interactiveness of different streams (reduce HOL from a single stream)
(3) Mitigate DoS / resource usage

For (1), one of the concerns foremost on my mind with these discussions is not breaking *existing* uses of large headers. I know they suck, but I do not consider outright breaking compat with these *existing* HTTP/1.X large headers as acceptable. If you run a large site using a reverse proxy with many many backends, then it can be difficult to switch to HTTP/2 at that reverse proxy if it breaks compatibility for certain services. I think I've seen 64kb floated as a reasonable max header block size. If all our relevant server/proxy folks agree with that, then I have no problem with it.

(2) is multifaceted because this HOL blocking can come in both headers and data. Let's break them down to (a) and (b).
  (a) should be fixed if we have a max headers size. If there's consensus around 64kb, then we're done there. Is there consensus? It wasn't obvious to me.
  (b) AIUI, in Greg et al's proposal, there's a default small (64kb) frame size, and it can be increased at the receiver's discretion. Since the receiver is in control here, that seems fine to me. I'm a bit disappointed by extra configuration and the resulting complexity, but it's clearly tractable and I think it's a reasonable compromise. As a sender myself, I can make sure not to screw up interactivity on the sending side. Having the control as a receiver to force smaller frames (and thereby *mostly* encourage less HOL blocking at the HTTP/2 layer) is enough for me. I do not consider this optimal, but I think it's acceptable.

(3) Greg, et al's proposal mitigates a number of DoS issues. That said, Roberto's highlighted to me the importance of being able to fragment large header blocks using multiple frames, in order to reduce the proxy buffering requirements. This is basically what CONTINUATION is used for. And the key distinction between CONTINUATION and jumbo header frames is that CONTINUATION allows for reduced buffering requirements in comparison to jumbo header frames, since you can fragment into multiple frames. Clearly, this incurs extra complexity. So we have a complexity vs buffering requirements tradeoff. IMHO, and that's without being an expert in the area, the complexity strikes me as very tractable. It honestly doesn't seem like that big a deal. I've heard complaints about CONTINUATION allowing a DoS vector, but as Greg has pointed out, it only allows as much of a DoS vector as jumbo header frames allow. And if we cap at 64kb anyway, then whatevs. It's really the code complexity that's different. And therein lies the tradeoff, at least AFAICT. I think the complexity increase is minor enough that, if people like Roberto think the reduction in buffering requirements is worth it for applications that want to be able to flush after only processing some headers, then whatevs. The complexity increase is minor, so that's fine by me.
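The fragmentation mechanism being debated is simple to picture: a header block larger than the frame limit goes out as one HEADERS frame followed by CONTINUATION frames, with END_HEADERS set only on the last, so a proxy can forward each fragment as it arrives instead of buffering the whole block. A hedged sketch (the frame layout here is simplified to a dict; only the END_HEADERS flag value is borrowed from HTTP/2):

```python
# Sketch of CONTINUATION-style fragmentation: HEADERS frame plus
# CONTINUATION frames, END_HEADERS set only on the final fragment.
# Frame representation is a simplified stand-in, not wire format.

END_HEADERS = 0x4  # flag value borrowed from HTTP/2

def fragment_header_block(block: bytes, max_frame_size: int):
    frames = []
    for offset in range(0, len(block), max_frame_size):
        chunk = block[offset:offset + max_frame_size]
        frame_type = "HEADERS" if offset == 0 else "CONTINUATION"
        frames.append({"type": frame_type, "flags": 0, "fragment": chunk})
    frames[-1]["flags"] |= END_HEADERS  # only the last fragment ends the block
    return frames

frames = fragment_header_block(b"h" * 50000, max_frame_size=16384)
assert [f["type"] for f in frames] == [
    "HEADERS", "CONTINUATION", "CONTINUATION", "CONTINUATION"]
assert frames[-1]["flags"] & END_HEADERS
assert all(not (f["flags"] & END_HEADERS) for f in frames[:-1])
```

A jumbo header frame carries the same block as one frame, which is why its worst-case buffering equals the full block size while CONTINUATION lets an intermediary work fragment by fragment.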

I think I've covered everything I've seen discussed in relation to the CONTINUATIONs and jumbo frames and what not. I may have gotten the arguments wrong since I only skimmed everything. If so, please correct me.

In other words, I think I'm mostly fine with Greg et al's proposal if they bring back CONTINUATIONs (so we get fragments and thus reduced buffering requirements in *certain* cases) but keep the header block capped at whatever level is enough to mitigate interoperability issues. I'd like to kill off as many settings as possible, but if we need that compromise, I'm willing to accept it.

Cheers,
Will

PS: Apologies again for any oversights. I only skimmed the threads, so I'm sure I've gotten some things wrong.

Received on Friday, 11 July 2014 06:47:50 UTC