
Re: Limiting allowable pre-SETTINGS requests

From: David Krauss <potswa@gmail.com>
Date: Fri, 6 Jun 2014 06:28:29 +0800
Message-Id: <2D358C90-9AF0-4B3F-AD97-3FE15E270937@gmail.com>
To: HTTP Working Group <ietf-http-wg@w3.org>

On 2014-06-06, at 12:10 AM, Jason Greene <jason.greene@redhat.com> wrote:

> Recent threads have discussed how more limited servers, perhaps running on embedded devices, or resource constrained intermediaries, could use SETTINGS to reduce the decoding overhead both in terms of table size, and depending on the output of #485, potentially usage of huffman.

Even embedded servers are heavier than the corresponding clients, and Huffman decoding takes very little in the way of resources: literally a few bytes of RAM.
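To make the memory claim concrete, here's a toy sketch (a hypothetical 4-symbol code, not the real HPACK Huffman table): because the code table is static and baked into the binary, the only per-connection state a decoder needs is the partially consumed code prefix.

```python
# Toy static Huffman decoder. The table is fixed at compile time, so the
# only runtime state is 'acc', the bits of the current unfinished symbol:
# a few bytes of RAM per connection, as claimed above.
# Hypothetical 4-symbol code: a=0, b=10, c=110, d=111 (not the HPACK table).
CODES = {'0': 'a', '10': 'b', '110': 'c', '111': 'd'}

def decode(bits: str) -> str:
    out = []
    acc = ''                       # per-connection decoder state
    for bit in bits:
        acc += bit
        if acc in CODES:           # prefix-free code: first match wins
            out.append(CODES[acc])
            acc = ''
    return ''.join(out)

print(decode('010110'))  # -> abc
```

A table-driven decoder for the real 257-symbol HPACK code is bigger in ROM, but the per-connection working state stays just as small.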

> However, even with this capability, these servers are still required to process requests up until the client has received and processed the SETTINGS frame. I understand that this is purposefully not a full negotiation due to the RTT cost. However, since there is also no form of flow-control on HEADER frames, a client can (and likely will) send as much as it can when the connection is initially created. Depending on the stickiness of traffic patterns (or more precisely lack thereof), this could be a significant volume of traffic.

Sending a “significant volume of traffic” to a “more limited server” will certainly bring it down through its narrow pipe alone, regardless of what the installed software does. (Which is to reject, as early as possible, any request outside a very limited scope or one that causes problems, regardless of what SETTINGS may say is legal.)

> Should there perhaps be some cooperative limitation on the amount of request data that can be sent before receiving the first SETTINGS frame (i.e a temporary flow-control)? 

It sounds like you’re proposing a limited header frame/block size. We don’t have that, regardless of when SETTINGS is applied.

Four kilobytes should be plenty for a proxy to route a stream and relieve the buffering pressure by streaming as HPACK was designed to do, but someone mentioned proxies peeking at cookies too. It seems that we need a closer look at what kind of implementation handles which specific use case. These issues aren’t specific to extra-simple servers.
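For what it's worth, the kind of cooperative limit under discussion is cheap to enforce on the server side. A sketch, assuming a hypothetical per-connection guard (the names and the 4 KiB figure are illustrative, not anything in the draft):

```python
# Hypothetical pre-SETTINGS guard: cap the total header-block bytes a
# server will buffer before the client can have seen our SETTINGS frame.
PRE_SETTINGS_HEADER_LIMIT = 4096  # the 4 KiB figure mentioned above

class ConnectionState:
    def __init__(self) -> None:
        self.settings_acked = False     # set True once SETTINGS is ACKed
        self.header_bytes_seen = 0

    def accept_header_fragment(self, frag: bytes) -> bool:
        """Return False if this fragment exceeds the pre-SETTINGS budget."""
        if self.settings_acked:
            return True                 # normal limits apply from here on
        self.header_bytes_seen += len(frag)
        return self.header_bytes_seen <= PRE_SETTINGS_HEADER_LIMIT
```

The open question is not the mechanism but the number: a routing proxy is happy under 4 KiB, while a cookie-inspecting proxy may not be.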

> An alternative could be for an implementation to ignore the request content and use Retry-After in some scheme to force adoption of the SETTINGS values. However, it becomes guess-work on specifying the time amount.

An alternative would be to drop the connection, perhaps with a canned 431 (Request Header Fields Too Large) error page.
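A minimal sketch of that alternative, shown with HTTP/1.1 framing for brevity (in HTTP/2 the equivalent would be a HEADERS frame carrying :status 431 followed by GOAWAY); the socket interface here is just whatever object exposes sendall/close:

```python
# Canned rejection: answer with a fixed 431 and close, rather than
# buffering an oversized header block or guessing a Retry-After value.
CANNED_431 = (
    b"HTTP/1.1 431 Request Header Fields Too Large\r\n"
    b"Connection: close\r\n"
    b"Content-Length: 0\r\n"
    b"\r\n"
)

def reject(sock) -> None:
    try:
        sock.sendall(CANNED_431)   # best-effort; the peer may already be gone
    finally:
        sock.close()               # drop the connection either way
```

The canned bytes cost nothing to produce, which is exactly what a resource-constrained server wants.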
Received on Thursday, 5 June 2014 22:28:59 UTC
