- From: Martin Thomson <martin.thomson@gmail.com>
- Date: Fri, 3 Jan 2014 10:16:59 -0800
- To: Daniel Sommermann <dcsommer@fb.com>
- Cc: HTTP Working Group <ietf-http-wg@w3.org>
On 2 January 2014 16:11, Daniel Sommermann <dcsommer@fb.com> wrote:
> Should servers be limited to sending values greater than zero for
> SETTINGS_MAX_CONCURRENT_STREAMS?

This question also applies to SPDY.

In the world I used to come from, such mechanisms were used to request that clients temporarily hold off on sending more requests. This would be similar in many respects to Retry-After, but on a connection-wide scale rather than scoped to a single resource. These sorts of facilities tend to be hugely useful in some limited scenarios.

I would expect that a client that encounters a zero value treats it no differently to any other value. If the client needs to send a request, and can't because there are too many streams open, it can either fail the request immediately, or it can enqueue the request for some limited amount of time.

If a stream limit causes continuing problems, it is probably advisable for the connection to be torn down. This can happen with a zero limit, or with a higher limit if a server fails to send END_STREAM properly. How clients detect this situation is probably going to be implementation dependent, but clear indicators are: excessive numbers of enqueued requests, enqueued requests timing out before even being sent, etc.

I'll note that the same sort of problem can happen for pushed resources at the server, though the obvious remedy there is RST_STREAM. I can't speak for SPDY, but I imagine that principle to be portable :)
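[Editor's illustration, not part of the original mail: a minimal sketch of the client behaviour described above. The class name `StreamGate`, the queue limits, and the timeout are assumptions chosen for the example, not anything from HTTP/2, SPDY, or a real implementation.]

    import time
    from collections import deque

    class StreamGate:
        """Illustrative client-side gate on concurrent streams.

        A zero SETTINGS_MAX_CONCURRENT_STREAMS is treated no differently
        from any other value: requests simply wait (or fail) until the
        peer frees a slot or raises the limit.
        """

        def __init__(self, max_concurrent_streams, max_queued=100, queue_timeout=30.0):
            self.max_concurrent_streams = max_concurrent_streams  # peer's SETTINGS value; may be 0
            self.open_streams = 0
            self.pending = deque()          # (request, enqueued_at) awaiting a free slot
            self.max_queued = max_queued    # assumed bound on the queue
            self.queue_timeout = queue_timeout

        def submit(self, request):
            """Send immediately if a slot is free; otherwise enqueue or fail."""
            if self.open_streams < self.max_concurrent_streams:
                self.open_streams += 1
                return "sent"
            if len(self.pending) >= self.max_queued:
                return "failed"             # fail fast rather than queue without bound
            self.pending.append((request, time.monotonic()))
            return "queued"

        def on_stream_closed(self):
            """Called when END_STREAM (or RST_STREAM) frees a slot."""
            self.open_streams -= 1
            if self.pending and self.open_streams < self.max_concurrent_streams:
                self.pending.popleft()
                self.open_streams += 1

        def connection_looks_stuck(self):
            """Heuristic for tearing the connection down, per the indicators in the
            mail: excessive queued requests, or queued requests timing out unsent."""
            if len(self.pending) >= self.max_queued:
                return True
            now = time.monotonic()
            return any(now - t > self.queue_timeout for _, t in self.pending)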
Received on Friday, 3 January 2014 18:17:28 UTC