Re: Design Issue: Max Concurrent Streams Limit and Unidirectional Streams

Yes, it does become a bit tricky here. I'm not quite sure exactly what
the solution ought to be. One possible approach would be for the
intermediary to use flow-control mechanisms to effectively rate limit
the client's requests. For instance, if the intermediary allows the
client to open 10 concurrent streams, and the client opens and
half-closes those streams at too high a rate without giving the server
time to properly respond, the intermediary can hold new streams for a
period of time, or reject them outright, until the server catches up.
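
To make that concrete, here is a rough sketch of the admission logic
in Go. Everything here (the names, the hold queue, the numbers) is
hypothetical illustration, not a reference to any actual API:

    package main

    import "fmt"

    // streamGate models an intermediary's per-connection decision
    // about newly initiated client streams. All names are made up.
    type streamGate struct {
        maxConcurrent int // concurrency limit advertised to the client
        serverActive  int // streams the origin is still working on
        held          int // streams queued while the server catches up
        holdLimit     int // how many streams we are willing to queue
    }

    // admit decides what to do with one newly initiated stream.
    func (g *streamGate) admit() string {
        switch {
        case g.serverActive < g.maxConcurrent:
            g.serverActive++
            return "accept" // server has capacity; forward it now
        case g.held < g.holdLimit:
            g.held++
            return "hold" // park it until the server catches up
        default:
            return "reject" // e.g. refuse the stream; client may retry
        }
    }

    func main() {
        // A client opening streams faster than the server responds
        // (serverActive never drains in this toy example).
        g := &streamGate{maxConcurrent: 10, holdLimit: 5}
        for i := 0; i < 20; i++ {
            fmt.Println("stream", i, "->", g.admit())
        }
    }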

Alternatively, a secondary MAX_OPEN_STREAMS parameter could be used.
The sender would still be limited to no more than
MAX_CONCURRENT_STREAMS open outbound streams at any given time and
would be allowed to initiate new streams after half-closing; however,
all open and half-open streams would count against the separate
MAX_OPEN_STREAMS limit. If that limit has not been hit, newly
initiated streams are accepted; otherwise they are held or rejected
until the half-open streams are closed. The assumption is that
MAX_OPEN_STREAMS >= MAX_CONCURRENT_STREAMS.
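
A minimal sketch of that two-limit check, again in Go and again with
hypothetical names and numbers:

    package main

    import "fmt"

    const (
        maxConcurrentStreams = 10  // the existing limit
        maxOpenStreams       = 100 // the proposed secondary limit
    )

    // streamCounts tracks the sender's outbound streams.
    type streamCounts struct {
        open       int // fully open streams
        halfClosed int // streams the sender has half-closed
    }

    // canInitiate reports whether a new stream may be started.
    // It assumes maxOpenStreams >= maxConcurrentStreams.
    func canInitiate(c streamCounts) bool {
        if c.open >= maxConcurrentStreams {
            return false // too many fully open streams
        }
        if c.open+c.halfClosed >= maxOpenStreams {
            return false // hold or reject until streams close
        }
        return true
    }

    func main() {
        fmt.Println(canInitiate(streamCounts{open: 5, halfClosed: 20})) // true
        fmt.Println(canInitiate(streamCounts{open: 5, halfClosed: 95})) // false
    }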

Yes, with low MAX_OPEN_STREAMS values we still run the risk of
starving out pushed streams, so the idea would be to set
MAX_OPEN_STREAMS intelligently (and possibly allow the value to change
dynamically over the life of a session) so that we avoid the issue.
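
For instance, a server could re-advertise the value as its response
backlog grows or shrinks. Purely illustrative; the scaling policy
below is made up:

    package main

    import "fmt"

    // advertisedMaxOpen sketches one made-up policy: shrink the
    // advertised MAX_OPEN_STREAMS value as the response backlog
    // grows, but never below the concurrency limit.
    func advertisedMaxOpen(backlog, floor, ceiling int) int {
        v := ceiling - backlog
        if v < floor {
            return floor
        }
        return v
    }

    func main() {
        for _, b := range []int{0, 50, 200} {
            fmt.Println("backlog", b, "-> advertise",
                advertisedMaxOpen(b, 10, 100))
        }
    }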

These, of course, aren't the only possibilities :-)

- James

On Thu, Apr 25, 2013 at 4:18 PM, Martin Thomson
<martin.thomson@gmail.com> wrote:
>> On Thu, Apr 25, 2013 at 4:11 PM, Martin Thomson
>> <martin.thomson@gmail.com> wrote:
>>> Do you mean that only outward bound streams count toward the
>>> concurrency limit?  That could be workable; it's certainly easier to
>>> explain.
>
> On 25 April 2013 16:13, James M Snell <jasnell@gmail.com> wrote:
>> Yes, Outward bound only.
>
> This has the biggest impact on servers and intermediaries.  How do
> they feel about having clients initiate more requests while the
> server is sending responses?
>
> Thinking on this more, it does add an interesting pipelining-like
> problem.  If all I'm doing is sending GET requests, then I can
> probably open up thousands of streams, but the server can only respond
> to a limited subset of those requests, holding requests (or responses)
> in waiting until the response logjam frees up.  I think that this is
> an undesirable property of the solution.  (MAX_CONCURRENT_STREAMS
> could then look very much like HTTP/1.1 with pipelining.)
