Re: minimum value for SETTINGS_MAX_CONCURRENT_STREAMS

Your arguments are reasonable. The only question is whether or not
supporting those use cases is more important than the simplicity that
I noted on the client end. It may very well be more important, I'm
open to that. I agree with Martin that we can probably quickly arrive
to a decision in Zurich.

On Sun, Jan 5, 2014 at 1:10 PM, Roberto Peon <grmocg@gmail.com> wrote:
> 0 is little different from k when you need n and n >> k.
>
> A setting of 0 will be useful in dos-like scenarios where one may wish to
> slow down the rate of connection attempts or new requests.
>
> In non http use cases it may be perfectly reasonable that the "server"
> declare that it only makes requests, and the "client" only answers them.
>
> -=R
>
> On Jan 3, 2014 10:56 AM, "William Chan (陈智昌)" <willchan@chromium.org> wrote:
>>
>> >From a descriptive standpoint, Chromium doesn't have special code
>> here. If it sees concurrency go down to 0, it'll just queue all
>> pending HTTP requests for that HTTP/2 connection. No timeout or
>> anything (we have very few timeouts in our HTTP stack, relying instead
>> *mostly* on OS level timeouts).
>>
>> As for whether or not it makes sense for us to do this heuristic
>> detection of server issues, we'd rather not go down that path if
>> possible. It makes our lives easier and improves interop (fewer
>> heuristics == good). But I guess we'd have to consider what options
>> the server has then. The server already has HTTP 5XX status codes, and
>> the 420 HTTP status code (lol jpinner), not to mention the HTTP/2
>> error code ENHANCE_YOUR_CALM (probably used by a RST_STREAM here,
>> unless you actually wanted to tear down the entire connection).
>>
>> So, my *preference* would be to disallow a value of 0, purely from a
>> selfish make my life easier perspective. But I'm open to
>> server/intermediary folks saying they need to be able to set this
>> setting to 0.
>>
>> On Fri, Jan 3, 2014 at 10:16 AM, Martin Thomson
>> <martin.thomson@gmail.com> wrote:
>> > On 2 January 2014 16:11, Daniel Sommermann <dcsommer@fb.com> wrote:
>> >> Should servers be limited to sending values greater than zero for
>> >> SETTINGS_MAX_CONCURRENT_STREAMS? This question also applies to SPDY.
>> >
>> > In the world I used to come from, such mechanisms were used to request
>> > that clients temporarily hold off on sending more request.  This would
>> > be similar in many respects to Retry-After, but on a connection-wide
>> > scale, rather than scoped to a single resource.
>> >
>> > These sorts of facilities tend to be hugely useful in some limited
>> > scenarios.
>> >
>> > I would expect that a client that encounters a zero value treats it no
>> > differently to any other value.  If the client needs to send a
>> > request, and can't because there are too many streams open, it can
>> > either fail the request immediately, or it can enqueue the request for
>> > some limited amount of time.
>> >
>> > If a stream limit causes continuing problems, it is probably advisable
>> > for the connection to be torn down.  This can happen with a zero
>> > limit, or with a higher limit if a server fails to send END_STREAM
>> > properly.  How clients detect this situation is probably going to be
>> > implementation dependent, but clear indicators are: excessive numbers
>> > of enqueued requests, enqueued requests timing out before even being
>> > sent, etc...
>> >
>> > I'll note that the same sort of problem can happen for pushed
>> > resources at the server, though the obvious remedy there is
>> > RST_STREAM.
>> >
>> > I can't speak for SPDY, but I imagine that principle to be portable :)
>> >
>>
>

Received on Sunday, 5 January 2014 22:07:22 UTC