Re: Limiting allowable pre-SETTINGS requests

On Jun 5, 2014, at 5:27 PM, David Krauss <potswa@gmail.com> wrote:

> 
> On 2014-06-06, at 12:10 AM, Jason Greene <jason.greene@redhat.com> wrote:
> 
>> Recent threads have discussed how more limited servers, perhaps running on embedded devices, or resource-constrained intermediaries, could use SETTINGS to reduce decoding overhead both in terms of table size and, depending on the outcome of #485, potentially usage of Huffman.
> 
> Even embedded servers are heavier than the corresponding clients, and Huffman takes very very little resources — literally a few bytes of RAM.

I guess this is moot since that issue was closed. That said, clients can certainly be heavier than the server they talk to (e.g. the printer case), and there is a real per-token CPU cost.
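To make that concrete, here is roughly what the advertisement looks like on the wire. A minimal Python sketch; the framing assumes the 9-octet frame header (24-bit length, type, flags, stream id), and everything beyond the setting identifier is illustrative:

    import struct

    SETTINGS_HEADER_TABLE_SIZE = 0x1   # setting identifier from the spec

    def settings_frame(settings):
        # Payload is one 6-octet entry per setting: 16-bit id, 32-bit value.
        payload = b"".join(struct.pack("!HI", ident, value)
                           for ident, value in settings)
        # Frame header: 24-bit length, type 0x4 (SETTINGS), no flags,
        # stream 0 (connection scope).
        return (struct.pack("!I", len(payload))[1:]
                + b"\x04\x00" + struct.pack("!I", 0) + payload)

    # A constrained server telling its peer to stop using the dynamic table:
    frame = settings_frame([(SETTINGS_HEADER_TABLE_SIZE, 0)])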

> 
>> However, even with this capability, these servers are still required to process requests up until the client has received and processed the SETTINGS frame. I understand that this is purposefully not a full negotiation due to the RTT cost. However, since there is also no form of flow-control on HEADER frames, a client can (and likely will) send as much as it can when the connection is initially created. Depending on the stickiness of traffic patterns (or more precisely lack thereof), this could be a significant volume of traffic.
> 
> Sending a “significant volume of traffic” to a “more limited server” will certainly bring it down due to its narrow pipe, regardless of what the installed software does. (Which is to reject, early, any request outside a very limited scope or that causes problems, regardless of what SETTINGS may say is legal.)

First, that’s an argument against the table size setting in the first place. My point is that the setting doesn’t do its purported job when you have lots of short-lived connections pipelining many requests. Second, a resource-constrained intermediary doesn’t necessarily mean a narrow pipe. In fact it could be the opposite: a load balancer with a fat pipe and a ton of traffic to process, where the bottleneck is CPU and memory.
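To put rough numbers on it (the connection count is made up; the table size is the protocol default): until our SETTINGS round-trips, every connection starts at the default table size, and an eager client can fill that table immediately:

    # Worst-case dynamic-table state a server holds while its
    # SETTINGS(HEADER_TABLE_SIZE=0) is still in flight.
    DEFAULT_TABLE_SIZE = 4096        # protocol default, octets
    connections = 10000              # hypothetical churn on a load balancer
    print(connections * DEFAULT_TABLE_SIZE / 2.0**20, "MiB")
    # -> ~39 MiB of table state the setting never got the chance to save,
    #    if connections are short-lived and requests arrive pre-SETTINGS.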


>> Should there perhaps be some cooperative limitation on the amount of request data that can be sent before receiving the first SETTINGS frame (i.e. a temporary flow control)?
> 
> It sounds like you’re proposing a limited header frame/block size. We don’t have that, regardless of when SETTINGS is applied.

I was more highlighting the problem and brainstorming potential solutions that don’t involve the RTT hit of a full negotiation, which would completely solve it. I wasn’t specifically worried about large headers, but rather about clients opening a connection, dumping lots of requests on it, and then dropping the connection without ever honoring the SETTINGS frame.

Sorry if my explanation was poor.
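One shape the idea could take, just to make it less abstract (the names and the 16K budget below are hypothetical, not proposed spec text): count the header-block octets accepted before our initial SETTINGS is acknowledged, and past some budget refuse streams with REFUSED_STREAM so the client at least knows a retry is safe:

    PRE_SETTINGS_HEADER_BUDGET = 16 * 1024   # hypothetical cap, octets

    class ConnState:
        def __init__(self):
            self.settings_acked = False           # our SETTINGS ACKed yet?
            self.pre_settings_header_octets = 0   # HEADERS/CONTINUATION seen

    def accept_header_frame(conn, frame_length, refuse_stream):
        """Return True to process the frame normally, False if refused."""
        if conn.settings_acked:
            return True
        conn.pre_settings_header_octets += frame_length
        if conn.pre_settings_header_octets > PRE_SETTINGS_HEADER_BUDGET:
            # The client is dumping requests faster than it can have seen
            # our SETTINGS; refuse the stream (RST_STREAM / REFUSED_STREAM)
            # so it knows the request was not processed and can retry.
            refuse_stream()
            return False
        return True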

> 
> Four kilobytes should be plenty for a proxy to route a stream and relieve the buffering pressure by streaming as HPACK was designed to do, but someone mentioned proxies peeking at cookies too. It seems that we need a closer look at what kind of implementation handles which specific use case. These issues aren’t specific to extra-simple servers.

Due to the delta-compression scheme, every stream’s headers will always have to be decoded and re-encoded unless the proxy is nothing more than a TCP tunnel. Reverse proxies and load balancers are more advanced than that: they typically need to split traffic by path/host and examine certain headers. So they have to pay that processing cost, and the table size setting is the only way to limit it. If it were reduced to 0, a proxy could reduce everything down to a scan and copy.
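Concretely, with the table pinned at 0 there is no encoder state to mirror, so passing a literal header field through really is just integer decoding plus a copy. A sketch (the function names are mine; the integer coding is HPACK’s prefixed-integer scheme):

    def decode_int(block, pos, prefix_bits):
        # HPACK prefixed-integer decoding.
        mask = (1 << prefix_bits) - 1
        value = block[pos] & mask
        pos += 1
        if value < mask:
            return value, pos
        shift = 0
        while True:
            b = block[pos]
            pos += 1
            value += (b & 0x7F) << shift
            shift += 7
            if not (b & 0x80):
                return value, pos

    def copy_literal_field(block, pos):
        # Literal header field without indexing (0000 prefix): with no
        # dynamic table it can be forwarded verbatim once we have scanned
        # past the name/value lengths. Huffman strings copy through too.
        start = pos
        name_index, pos = decode_int(block, pos, 4)
        if name_index == 0:                       # literal name follows
            name_len, pos = decode_int(block, pos, 7)
            pos += name_len
        value_len, pos = decode_int(block, pos, 7)
        pos += value_len
        return block[start:pos], pos              # bytes to copy to output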
> 
>> An alternative could be for an implementation to ignore the request content and use Retry-After in some scheme to force adoption of the SETTINGS values. However, it becomes guess-work on specifying the time amount.
> 
> An alternative would be to drop the connection, perhaps with a canned error 431 page.

The issue with that is that it implies the request can’t be retried, when what’s really desired is for the client to resend a request that conforms to the advertised SETTINGS.
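To spell the mismatch out (the status code and header name are real, the values illustrative):

    # 431 alone reads as "this request's header fields are too large",
    # which the client has no reason to treat as transient:
    refusal = {":status": "431"}

    # Bolting on Retry-After turns it into "try again later", but the
    # delay is pure guesswork about when the client will have applied
    # our SETTINGS:
    refusal_retry = {":status": "431", "retry-after": "1"}   # 1s: a guess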
 
--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat

Received on Friday, 6 June 2014 18:47:39 UTC