
Re: #541: CONTINUATION

From: David Krauss <potswa@gmail.com>
Date: Fri, 4 Jul 2014 12:58:24 +0800
Cc: Greg Wilkins <gregw@intalio.com>, Amos Jeffries <squid3@treenet.co.nz>, HTTP Working Group <ietf-http-wg@w3.org>
Message-Id: <D78D27C6-497F-4D67-B659-00BA69A600C8@gmail.com>
To: Mark Nottingham <mnot@mnot.net>

On 2014-07-04, at 6:58 AM, Mark Nottingham <mnot@mnot.net> wrote:

> This drops us back into the jumbo argument. Allowing large DATA frames defeats the goals of multiplexing, and would only encourage clients to use multiple connections, putting us into the same situation we have with /1. While it’s easy to imagine non-browser, specialist uses of HTTP where both ends know that only one request will be outstanding at a time and are therefore able to agree to this, in the general case this is not so.

This logic sounds faulty. A reduced DATA frame size does not prevent an implementation from inflicting needless head-of-line (HOL) blocking on itself, and jumbo frames do not cause an implementation to do so.

An implementation is responsible for deciding how much data it can commit to sending before switching gears, and that decision should be informed by the TCP window sizes.
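To make that concrete, here is a minimal sketch (the function name and parameters are illustrative, not from any real implementation) of a sender sizing its DATA frames by both the peer's advertised maximum frame size and the remaining flow-control window, so that no single write commits more data than the connection can absorb:

```python
def data_frame_chunks(body: bytes, max_frame_size: int, flow_window: int):
    """Yield (offset, length) pairs for the DATA frames to send now.

    The sender never emits a frame larger than max_frame_size, and never
    commits more total data than the available flow-control window --
    anything beyond the window waits for a WINDOW_UPDATE.
    """
    offset = 0
    budget = min(len(body), flow_window)  # cap by the window, not the body
    while offset < budget:
        length = min(max_frame_size, budget - offset)
        yield offset, length
        offset += length

# Example: a 40,000-byte body, 16,384-byte frames, 30,000-byte window
# yields two frames now; the final 10,000 bytes wait on the window.
frames = list(data_frame_chunks(b"x" * 40_000, 16_384, 30_000))
```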

Of course it’s possible to get tripped up, but the possibilities leading to stalls or deadlock aren’t eliminated by a fixed frame limit; they are only mitigated, capped at a given amount of data in flight. To maximize that mitigation, just let receivers declare a maximum frame size via SETTINGS.

Will a max frame size just re-open the need for CONTINUATION? I don’t think so. We can see that the header size distribution falls off faster than exponentially, but optimal frame size will continue to increase with network evolution. The few applications that absolutely must have 64K+ of headers can work around or deal with potentially receiving 64K or 100K frames. And again, a responsible sender shouldn’t unconditionally max out the frame size anyway.
Received on Friday, 4 July 2014 04:59:02 UTC
