
Re: HEADERS and flow control

From: David Krauss <potswa@gmail.com>
Date: Thu, 22 May 2014 00:01:20 +0800
Cc: Mark Nottingham <mnot@mnot.net>, Roberto Peon <grmocg@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>
Message-Id: <576B9269-9C5E-4A6E-8F37-1632B9825C3A@gmail.com>
To: Greg Wilkins <gregw@intalio.com>, Michael Sweet <msweet@apple.com>

On 2014-05-21, at 9:50 PM, Greg Wilkins <gregw@intalio.com> wrote:

> 
> On 21 May 2014 09:57, David Krauss <potswa@gmail.com> wrote:
> The user has no motivation to get around flow control.
> 
> Perhaps not today, but what about in 10 years time or 20 years time???
> 
> Plus, I can already think of situations where it is desirable now.  Any time an HTTP/2.0 connection is used for shared traffic (perhaps from a connection aggregator, or even just multiple client tabs all accessing a common host/service), bypassing flow control can give an unfair slice of the multiplexed channel.  Indeed, not only are headers not flow controlled, they cannot be interleaved with other frames!

The solution (which I mentioned) is to limit the size of the decoded header set; this is already necessary to keep an implementation stable. That limit is sufficient as long as the length of the encoded block is bounded by the size of the decoded set. If duplicates are allowed, however, the encoding of a header set may be arbitrarily large. If an intermediary doesn't catch such an exploit, I can see it being used to commandeer the connection from a reverse proxy to a server.
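As a rough sketch of the idea, a decoder can enforce a cap on the cumulative decoded size while it processes a header block, aborting as soon as the limit is crossed rather than buffering the whole set first. This guards against an encoding made arbitrarily large via duplicates. All names here are illustrative, not from any real HTTP/2 implementation:

```python
# Hypothetical sketch: cap the decoded header set size during decoding,
# so an arbitrarily large encoded block cannot consume unbounded memory.

MAX_HEADER_SET_SIZE = 16 * 1024  # bytes of decoded name + value data (illustrative)


class HeaderSetTooLarge(Exception):
    """Raised when the decoded header set exceeds the configured limit."""


def decode_header_block(entries, limit=MAX_HEADER_SET_SIZE):
    """Accumulate (name, value) pairs, aborting once the cumulative
    decoded size passes the limit instead of decoding the whole set."""
    decoded = []
    total = 0
    for name, value in entries:
        total += len(name) + len(value)
        if total > limit:
            # Abort immediately; an intermediary would reset the stream
            # (or tear down the connection) at this point.
            raise HeaderSetTooLarge(f"decoded set exceeds {limit} bytes")
        decoded.append((name, value))
    return decoded
```

The key point is that the check happens incrementally, per entry, so duplicated entries in the encoding cannot inflate memory use beyond the cap.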

But enabling flow control would only invite abuse. The decoded header set should not consume unbounded resources, or more than a typical connection window, no matter what. The restriction that is needed (or really, that deserves mention in the spec) is stronger than flow control.


On 2014-05-21, at 10:32 PM, Michael Sweet <msweet@apple.com> wrote:

> Even with the current header compression, there is no reason to prevent intervening DATA frames, or to omit HEADER frames from the scheduling/queuing algorithms that clients and servers must implement for HTTP/2.  The only requirement based on the header compression algorithm that has been adopted is that there can only be a single set of HEADER frames in flight in either direction, and I don't think that is a big step beyond what is already required.


Decoding takes resources even with a stateless compressor, because the in-flight header set needs to exist in memory. With the current algorithm, the server can at least hope that having a complete header set, and beginning to service the request, will free up resources to handle a new header set.


Received on Wednesday, 21 May 2014 16:01:59 UTC
