On 2014-07-03 19:11, Greg Wilkins wrote:
> On 3 July 2014 16:36, Mark Nottingham wrote:
>> On 3 Jul 2014, at 4:29 pm, Amos Jeffries wrote:
>> > On 2014-07-03 00:29, Poul-Henning Kamp wrote:
>> >> PPS:
>> >>    I'm looking for co-authors for a jumboframe extension draft.
>> >
>> > Do you really need them? If you publish a sensible draft I expect
>> > several of us not liking CONTINUATION would implement.
>> Would you (or any of the other Jumbo advocates) mind explaining why
>> on its own it’s better than CONTINUATION, beyond syntactic sugar?
>> AIUI you have to buffer and decode the entire header set in either
>> case; is there some other aspect where it’s significantly better?

For Mark: this list by Greg pretty much sums up my reasons. I have added 
some inline comments after some entries where we differ.

> I've been very keen to say that Continuations ARE Jumbo frames, but I
> do see some key differences that go beyond syntactic sugar:
>    - The max acceptable size will be declared in a SETTING, allowing
>    for it to be known in advance.
>    - Max frame size can be adjusted down for low-resource impls.
>    - A true jumbo frame has its total size in the first frame, so an
>    impl can immediately know if it is acceptable or not. With
>    continuations, you might have to process 5 frames to reach a 64KB
>    limit, only then to find out that there is a 6th frame and you have
>    to throw away all that work.

This is the key reason I support jumbo frames. One of the key benefits 
we/Squid would get out of HTTP/2 is shrinking the default buffer size 
down closer to 16-32KB, enough for a frame or two.
  Each time a CONTINUATION arrives we would have to re-allocate the 
entire buffer and move not just the already-received bytes; after 
reading the CONTINUATION we would also have to shuffle the payload 
bytes forward over the CONTINUATION frame header, in order to pass the 
encoded bytes to the HPACK decoder as one contiguous block.
  A jumbo frame gives us the size to reallocate the buffer to before 
reading the entire large header set into it, cutting away N 
reallocate-and-copy cycles per request.
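To make the two receive paths concrete, here is a minimal sketch. It is 
illustrative only: the frame layout assumed is the RFC 7540-style 
9-octet header (24-bit length, type, flags, stream id), the 
`MAX_HEADER_BYTES` limit and function names are hypothetical, and real 
C/C++ implementations such as Squid would use realloc-and-memmove 
rather than Python byte copies.

```python
import struct

FRAME_HEADER = 9               # octets; RFC 7540-style header (assumption)
MAX_HEADER_BYTES = 64 * 1024   # hypothetical local limit, e.g. from a SETTING

def parse_frame_header(buf, off):
    """Return (length, type, flags, stream_id) of the frame at `off`."""
    hi, lo, ftype, flags, sid = struct.unpack_from(">HBBBI", buf, off)
    return (hi << 8) | lo, ftype, flags, sid & 0x7FFFFFFF

def recv_continuations(wire):
    """CONTINUATION path: grow the buffer per frame and copy each
    payload past its frame header so the encoded header block ends up
    contiguous for the HPACK decoder.  The limit can only be checked
    after the work of buffering each fragment is already done."""
    block, off = bytearray(), 0
    while True:
        length, ftype, flags, _ = parse_frame_header(wire, off)
        off += FRAME_HEADER
        block += wire[off:off + length]        # one copy cycle per frame
        off += length
        if len(block) > MAX_HEADER_BYTES:
            raise ValueError("limit hit only after buffering prior frames")
        if flags & 0x4:                        # END_HEADERS
            return bytes(block)

def recv_jumbo(wire):
    """Jumbo path: the total size is in the first frame header, so the
    receiver can reject, or allocate exactly once, up front."""
    length, ftype, flags, _ = parse_frame_header(wire, 0)
    if length > MAX_HEADER_BYTES:
        raise ValueError("rejected before buffering anything")
    return bytes(wire[FRAME_HEADER:FRAME_HEADER + length])
```

The difference is one bounds check and one allocation versus a 
copy-and-check cycle per fragment, with the possibility of doing all 
that work and still having to throw it away.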

>    - The code path for handling large or small frames will be pretty
>    much the same for jumbo frames, with the exception of the code
>    that determines the frame length.  With continuations, there is a
>    significantly different code path that will be executed only for
>    headers that are larger than 20-something KB.  This is a code path
>    that will be rarely used and thus probably not well tested.  Here
>    be dragons!
>    - Jumbo frames can also be applied to DATA frames (IF both
>    endpoints so desire).  This can be done either to simply tune the
>    frame size to the most efficient size for the network (e.g. the
>    fast hop between a load balancer and application server may be
>    able to use a much larger frame size and still have good
>    multiplexing), or it may even be used to allow a single frame to
>    send very large content.

Willy T's numbers about Gbps line-rate issues already covered the 
reasons this is important for high-speed users. I can point at CERN 
here, pumping science data over HTTP in 1TB-sized chunks. BOINC 
applications are another set of HTTP-based science transfers, pumping 
out vast numbers of GB-sized chunks instead. Cutting the line speed by 
96% is not going to make them happy with HTTP/2. Allowing these rare 
cases to negotiate a larger DATA frame size means more speed, and more 
conversions to HTTP/2.
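A rough back-of-envelope (my own numbers, assuming the RFC 7540-style 
9-octet frame header and 16KB default frame size; the hypothetical 1MB 
jumbo size is just for illustration) shows how the per-transfer frame 
count scales:

```python
FRAME_HEADER = 9        # octets; RFC 7540-style frame header (assumption)
TRANSFER = 2 ** 40      # one 1TB chunk, as in the CERN example

def framing_cost(frame_size):
    """Frames emitted and total header-octet overhead for one transfer."""
    frames = TRANSFER // frame_size
    return frames, frames * FRAME_HEADER

small = framing_cost(16 * 1024)      # default-sized frames
large = framing_cost(1024 * 1024)    # hypothetical negotiated 1MB frames

# ~67 million frame headers to generate and parse, versus ~1 million
print(small, large)
```

Per-frame CPU work (header generation, parsing, flow-control 
accounting) scales with the frame count, which is the effect Willy's 
Gbps measurements were pointing at.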

>    - Closing stuff down is just really hard.  No, it's harder than
>    that!  I mean really really difficult!  SSL handshakes,
>    half-closed TCP connections, race conditions, blah blah blah - it
>    is just really really really difficult to write robust code that
>    always closes neatly, without double/triple work, and does not
>    leak any resources.  Any simplification in this part of the code
>    will be really really really valuable.

The way this is phrased seems a bit dramatic, but in my experience and 
that of some of our users it is probably an understatement.
  The teardown issue can be seen today in HTTP/1.1 with multiple 
persistent connections pumping more than 20K requests per second. If 
one or more clients start causing regular connection teardown, the 
system can easily run out of available sockets, effectively DoSing new 
clients, and it consumes noticeably more CPU. HTTP/2 is in a prime 
position to resolve this by requiring only stream teardown, but 
CONTINUATION opens the door for broken/malicious clients to emit a 
single frame in the wrong position and there goes the whole connection 
(again, and again, ...).
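A sketch of why one misplaced frame is so costly - my reading of the 
draft's CONTINUATION rules reduced to a toy state machine, not real 
implementation code (frame tuples and return strings are invented for 
illustration):

```python
END_HEADERS = 0x4

def check_sequence(frames):
    """frames: list of (type, flags, stream_id) tuples.

    Once a HEADERS frame omits END_HEADERS, only CONTINUATION frames on
    the same stream may follow until END_HEADERS is seen.  Anything
    else is a *connection*-level error: the whole connection, and every
    concurrent stream on it, is torn down."""
    expecting = None                  # stream id awaiting CONTINUATION
    for ftype, flags, sid in frames:
        if expecting is not None:
            if ftype != "CONTINUATION" or sid != expecting:
                return "connection error: GOAWAY (all streams lost)"
            if flags & END_HEADERS:
                expecting = None
        elif ftype == "HEADERS" and not (flags & END_HEADERS):
            expecting = sid
    return "ok"
```

One broken or malicious client interleaving a single wrong frame at 
that point costs every other multiplexed stream on the connection, 
which is exactly the repeated-teardown scenario above.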

>    - Jumbo frames mean that the END_STREAM flag is set on the last
>    frame of a stream.  It is a real WTF moment for anybody looking at
>    the spec when they see CONTINUATIONs can be sent after END_STREAM.
>    The confusion this causes should not be underestimated, and
>    reading this list is a good indication of the communication
>    problems that will result.
>    - And finally - CONTINUATIONs are jumbo frames.  If it quacks like
>    a duck....

... smells like a duck ...


Received on Thursday, 3 July 2014 12:42:25 UTC