
Re: #540: "jumbo" frames

From: Willy Tarreau <w@1wt.eu>
Date: Wed, 25 Jun 2014 06:55:31 +0200
To: Mark Nottingham <mnot@mnot.net>
Cc: Poul-Henning Kamp <phk@phk.freebsd.dk>, Greg Wilkins <gregw@intalio.com>, HTTP Working Group <ietf-http-wg@w3.org>
Message-ID: <20140625045531.GC10406@1wt.eu>

Hi Mark,

On Wed, Jun 25, 2014 at 02:10:46PM +1000, Mark Nottingham wrote:
> <https://github.com/http2/http2-spec/issues/540>
> 
> On 24 Jun 2014, at 11:24 pm, Poul-Henning Kamp <phk@phk.freebsd.dk> wrote:
> 
> > In message <20140624102030.GC25779@1wt.eu>, Willy Tarreau writes:
> > 
> >> Instead, (I remember that it was already discussed here in the past), I
> >> really think we'd need to support large frames for large data sets, that
> >> are typically usable for such sites which have few streams per client but
> >> very large ones.
> > 
> > I agree.
> > 
> > Moving objects which are trivially megabytes and gigabytes in size
> > using 16kB frames just doesn't make any sense, in particular not
> > given that MTUs above 9K are actively being discussed again.
> 
> 
> The simplest way to address this would be to un-reserve the first two bits of
> the length <http://http2.github.io/http2-spec/#rfc.section.4.1>; that would
> get us back up to 64k, letting Willy get to 35 Gbps. Given that 256k only got
> him to 40 Gbps, that seems like a reasonable stopping point for HTTP/2.

Please note something I probably didn't mention: I was limited to
40 Gbps by the NICs, and above 256 kB I started to see idle CPU again.
Other people I know who ran tests with a few more NICs have already
reached slightly more than 60 Gbps. In fact, building a lab at these
rates is not something you do with commodity hardware every day, so
people tend to use what they already have in their racks and try to
get the most out of it.

> Personally, I think that's a reasonable change -- the argument for 14 bits
> was to make sure people didn't abuse multiplexing and that they properly
> implemented continuation, etc.; I think those goals could be met by some
> prose in the spec and proper testing.
> 
> Doing more than 16 bits would take a lot more back-and-forth in the WG, and
> is likely to encounter a lot of resistance from implementers, from what I've
> seen.

I think we could proceed differently in a way that would keep people
from abusing the frame size. Instead, we'd use a single bit to indicate
whether the size is expressed in bytes or in 4 kB pages. Yes, that
allows frames of up to 64 MB. But it also means that when you want to
use this mode, you no longer have byte granularity, so the only ones
tempted to use it will be those sending huge files, for whom having to
send the last chunk separately is a detail compared to the gains of
sending large chunks.

> WRT the "jumbo" frame (i.e., flagging that some prefix of the payload is
> actually an extension length) -- this sort of hack is necessary to back-port
> bigger frames onto an existing protocol. Let's not do it out of the gate.

In fact, the proposal above could be much simpler than that.

> Those are just my personal impressions. I'd very much like to hear from other
> folks in the WG what they think.
> 
> Regards,
> 
> --
> Mark Nottingham   https://www.mnot.net/

Regards,
Willy
Received on Wednesday, 25 June 2014 04:56:00 UTC
