
Re: #540: "jumbo" frames

From: Willy Tarreau <w@1wt.eu>
Date: Wed, 25 Jun 2014 12:30:18 +0200
To: David Krauss <potswa@gmail.com>
Cc: HTTP Working Group <ietf-http-wg@w3.org>
Message-ID: <20140625103018.GJ5531@1wt.eu>
On Wed, Jun 25, 2014 at 06:20:18PM +0800, David Krauss wrote:
> On 2014-06-25, at 5:35 PM, <K.Morgan@iaea.org> wrote:
> > Bad idea IMO.  That would really paint HTTP/2 into a corner.  With no
> > reserved bits left, there would never be a chance to go above 64K frames.
> > i.e. you could never " back-port bigger frames onto an existing protocol"
> IPv6 packets only go up to 64K, so no network processor is going to get away
> with coarser granularity for the foreseeable future.

Except that intermediaries are not packet processors, they are proxies, and
they work with TCP streams. So they perform a read() call, retrieve as much
as they can from the system buffers, apply some processing, perform a send()
call, and sleep until they're woken up again with a signal that the buffer is
available again and/or that more data is ready to read. It is exactly this
high-frequency sleep/wake cycle that is performance-critical: a lot of syscall
and scheduling overhead for very little data processed.
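To make the cycle concrete, here is a minimal sketch (my illustration, not
haproxy code or anything from this thread) of the per-wake-up work a stream
proxy does: one recv() to drain whatever the kernel has buffered, one send()
to forward it:

```python
import socket

def relay(src: socket.socket, dst: socket.socket, bufsize: int) -> None:
    """One wake-up of a stream proxy: drain the socket buffer, forward it.

    The syscall and scheduling cost is per wake-up, not per byte, so the
    larger `bufsize` (and the more data queued), the cheaper each byte is.
    """
    data = src.recv(bufsize)   # one syscall: read as much as is available
    if data:
        dst.sendall(data)      # forward it in as few syscalls as possible
```

In a real proxy this function body sits inside an event loop (epoll/kqueue
via something like Python's selectors module) that sleeps between wake-ups;
the point of the message is how often that loop has to spin.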

When you have the ability to wake up, read 1 MB, and send it immediately,
at least you don't feel like you were woken up for nothing. Don't forget that
at 100 Gbps, this operation on 1 MB buffers is performed 12000 times a second!
At 64 kB, that's 200000 times a second that you stop and start.
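The back-of-the-envelope numbers above check out; reproducing them:

```python
# 100 Gbps link, converted to bytes per second.
link_bps = 100e9
bytes_per_sec = link_bps / 8          # 12.5 GB/s

# Wake-ups per second needed to sustain that rate at each buffer size.
wakeups_1mb = bytes_per_sec / (1 << 20)   # ~11921/s, the "12000" above
wakeups_64k = bytes_per_sec / (1 << 16)   # ~190735/s, the "200000" above
```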

Received on Wednesday, 25 June 2014 10:30:45 UTC
