Re: #540: "jumbo" frames

As an implementor of a couple of versions of h2, I would much prefer to
keep the core frame header processing as simple as possible.  The idea of
introducing a variable-length integer there turns a very simple read of 8
bytes into a read, process, then possibly (and probably rarely) read some
more to get the size.  It is not that it is necessarily hard or costly,
but it is work that seems unnecessary.  I understand the throughput
concerns and think Willy's suggestion of the 15th or 16th bit indicating
that the remaining size is in larger blocks (size TBD) is a great one.
I could get behind that.  I still have complexity concerns around h2 in
general, but this seems a minor compromise for a larger good.
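
To make that concrete, here is a rough sketch (in Go, purely illustrative,
not from any actual implementation) of what the scale-bit idea could look
like.  The 8-octet header layout follows the current draft (2 reserved
bits plus a 14-bit length, 8-bit type, 8-bit flags, 31-bit stream
identifier); the bit position and the 256-octet block size are made up,
since both are TBD.  The point is just that the header parse stays a
single fixed-size read:

package main

import (
	"encoding/binary"
	"fmt"
)

const jumboUnit = 256 // hypothetical block size; "size TBD" in the discussion

type frameHeader struct {
	Length   uint32 // payload length in octets
	Type     uint8
	Flags    uint8
	StreamID uint32
}

// parseHeader decodes a fixed 8-octet header.  Even with the scale bit
// there is still exactly one 8-byte read; no follow-up read is needed
// to learn the payload size.
func parseHeader(b [8]byte) frameHeader {
	lenField := binary.BigEndian.Uint16(b[0:2])
	length := uint32(lenField & 0x3FFF) // low 14 bits, as in the current draft
	if lenField&0x4000 != 0 {           // hypothetical: one reserved bit scales the length
		length *= jumboUnit
	}
	return frameHeader{
		Length:   length,
		Type:     b[2],
		Flags:    b[3],
		StreamID: binary.BigEndian.Uint32(b[4:8]) & 0x7FFFFFFF,
	}
}

func main() {
	// Scale bit set, 14-bit value 64 -> 64 * 256 = 16384-octet payload,
	// type 0x0, no flags, stream 1.
	h := parseHeader([8]byte{0x40, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01})
	fmt.Printf("%+v\n", h)
}

With that, a 14-bit value of 64 and the scale bit set yields a 16K
payload, and the receiver never has to guess how many header bytes to
wait for.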

-stephen

On 6/25/14, 6:45 AM, "Adrian Cole" <adrian.f.cole@gmail.com> wrote:

>I'll offer another implementer pov.
>
>TL;DR;
>
>I'd personally be in favor of a jumbo frame, and am happier with using
>a variable-length integer encoding to deal with its unbounded nature.
>Even if variable-length integers are a rat's nest, I'd prefer we
>expend more energy on this one before giving up.
>
>
>Continuations "feel" similar in code to concatenating data frames (to
>equal a content length). Except that the latter seems more immediately
>sensible as other frames can be interleaved. When you look at how to
>statefully (carry end of stream flag, etc) page through continuations
>under hpack, it indeed feels complicated and offers little value to
>the *code*, basically makes it a spot that bugs are likely to creep
>into. Jumbo is simpler, provided we address the length concern, plus
>we can always bolt on constraints like a max on that. IOTW, the
>framing itself is the more important bit.
>
>Knowing that Pinner's in Hawaii, I'm sure we'll get at least one more
>implementer's perspective a little later.
>
>-A
>
>On Wed, Jun 25, 2014 at 5:23 AM, Matthew Kerwin <matthew@kerwin.net.au>
>wrote:
>> K.Morgan@iaea.org <K.Morgan@iaea.org> wrote:
>>> So barring a change to 64 bits, you need
>>> the CONTINUATION hack or the "jumbo
>>> frame" hack.  Which hack is better?
>>
>> My original motivation for jumbo frames was that I wanted to break
>> hpack's blocking behaviour, by only compressing the HEADERS (or PP)
>> frame and making each CONTINUATION stand alone. Because some
>> individual headers are known to be >16K I needed a way to extend
>> CONTINUATION frames (and only them). Thus there'd be only one hack,
>> only useful for and only used by folks who insist on sending huge
>> headers -- and incidentally easy to detect and apply social
>> back-pressure against.
>>
>> Where this has gone, it feels to me too much like a hacky workaround
>> with too many shortfalls and ragged corners. My opinion is that we
>> either bite the bullet and extend the common length field out to N*
>> bits (with settings for receiving variable-length frames
>> [max_data_length, max_headers_length, max_settings_length?] to keep
>> things sane), thus simultaneously fixing your data throughput concerns
>> and rendering CONTINUATION unnecessary, or suck it up and keep
>> everything the way it is.
>>
>> I don't know whether it's better here to bite or suck. I suspect
>> people will have opinions.
>>
>>
>> *32, 64, whatever -- something everyone considers "big enough" for the
>> foreseeable future.
>>
>> --
>>   Matthew Kerwin
>>   http://matthew.kerwin.net.au/
>>
>
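
For comparison, here is the rough shape of the variable-length
alternative Adrian and Matthew are discussing (again just a sketch in
Go, not anyone's implementation): treat the 14-bit length field as an
HPACK-style prefix integer, so ordinary frames still cost one fixed
read and only oversized frames take the extra "read some more" path.
The maxFrameLength cap is invented, standing in for the kind of
max_*_length settings mentioned above:

package main

import (
	"bufio"
	"bytes"
	"errors"
	"fmt"
)

const maxFrameLength = 1 << 20 // hypothetical receiver-advertised cap

// readLength decodes a length with a 14-bit prefix, HPACK-integer
// style: if the prefix is all ones, continuation octets follow.
func readLength(r *bufio.Reader, prefix uint16) (uint64, error) {
	const max14 = (1 << 14) - 1
	n := uint64(prefix & max14)
	if n < max14 {
		return n, nil // common case: one read, done
	}
	// Rare case: keep reading 7-bit groups until the top bit is clear.
	var shift uint
	for {
		b, err := r.ReadByte()
		if err != nil {
			return 0, err
		}
		n += uint64(b&0x7F) << shift
		if b&0x80 == 0 {
			break
		}
		shift += 7
	}
	if n > maxFrameLength {
		return 0, errors.New("frame exceeds advertised maximum length")
	}
	return n, nil
}

func main() {
	// Prefix of 0x3FFF (all ones) plus continuation octet 0x01:
	// 16383 + 1 = 16384 octets.
	r := bufio.NewReader(bytes.NewReader([]byte{0x01}))
	n, err := readLength(r, 0x3FFF)
	fmt.Println(n, err)
}

The branch in the middle is the "read, process, then possibly read some
more" step from the top of this mail; whether that cost is acceptable
is exactly the question on the table.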

Received on Wednesday, 25 June 2014 13:57:42 UTC