Re: #540: "jumbo" frames

From: Matthew Kerwin <matthew@kerwin.net.au>
Date: Wed, 25 Jun 2014 21:23:10 +1000
Message-ID: <CACweHNCuyX4ML+tJTtxeiS0Su=STArxs3fx8Vxj_37UOrV-ibA@mail.gmail.com>
To: Greg Wilkins <gregw@intalio.com>
Cc: Mark Nottingham <mnot@mnot.net>, "K.Morgan@iaea.org" <K.Morgan@iaea.org>, Poul-Henning Kamp <phk@phk.freebsd.dk>, Willy Tarreau <w@1wt.eu>, HTTP Working Group <ietf-http-wg@w3.org>, Martin Dürst <duerst@it.aoyama.ac.jp>
K.Morgan@iaea.org <K.Morgan@iaea.org> wrote:
> So barring a change to 64 bits, you need
> the CONTINUATION hack or the "jumbo
> frame" hack.  Which hack is better?

My original motivation for jumbo frames was that I wanted to break
HPACK's blocking behaviour, by compressing only the HEADERS (or PP)
frame and making each CONTINUATION stand alone. Because some
individual headers are known to be >16K, I needed a way to extend
CONTINUATION frames (and only them). That way there'd be only one
hack, useful only for and used only by folks who insist on sending
huge headers -- and incidentally one that's easy to detect and apply
social back-pressure against.

Where this has gone now feels to me too much like a hacky workaround,
with too many shortfalls and ragged corners. My opinion is that we
either bite the bullet and extend the common length field out to N*
bits (with settings for receiving variable-length frames
[max_data_length, max_headers_length, max_settings_length?] to keep
things sane), thus simultaneously fixing your data throughput concerns
and rendering CONTINUATION unnecessary, or suck it up and keep
everything the way it is.
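To make the "extend the length field" option concrete, here is a rough sketch of what a receiver-side check might look like if the common frame header grew to a 32-bit length, with per-type caps advertised via settings. The setting names (max_data_length, max_headers_length) and the 10-octet header layout are purely illustrative assumptions, not anything specified:

```python
import struct

# Hypothetical per-type limits a receiver might advertise in SETTINGS.
# The names and values here are illustrative only.
SETTINGS = {
    "max_data_length": 1 << 20,     # 1 MiB cap on DATA frames
    "max_headers_length": 1 << 18,  # 256 KiB cap on header blocks
}

FRAME_TYPE_DATA = 0x0
FRAME_TYPE_HEADERS = 0x1

def parse_frame_header(buf: bytes):
    """Parse a sketched extended frame header: 32-bit length,
    8-bit type, 8-bit flags, 31-bit stream id (10 octets total)."""
    if len(buf) < 10:
        raise ValueError("need 10 octets for the extended header")
    length, ftype, flags, stream_id = struct.unpack("!IBBI", buf[:10])
    stream_id &= 0x7FFFFFFF  # high bit of the stream id is reserved
    # Enforce the cap this receiver advertised for this frame type.
    cap = {FRAME_TYPE_DATA: SETTINGS["max_data_length"],
           FRAME_TYPE_HEADERS: SETTINGS["max_headers_length"]}.get(ftype)
    if cap is not None and length > cap:
        raise ValueError(f"{length}-octet frame exceeds advertised cap {cap}")
    return length, ftype, flags, stream_id
```

Under a scheme like this, a single 70,000-octet HEADERS frame -- well past today's 16K frame limit -- parses cleanly with no CONTINUATION needed, while anything beyond the advertised cap is rejected outright.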

I don't know whether it's better here to bite or suck. I suspect
people will have opinions.

*32, 64, whatever -- something everyone considers "big enough" for the
foreseeable future.

  Matthew Kerwin
Received on Wednesday, 25 June 2014 11:23:40 UTC
