RE: #38 - HTTP2 min value for server supported max_concurrent_streams


As Martin said, 1 seems overly restrictive.

My major concern is not the particular value, but that we have a minimum value and that the default be the same as that minimum.  Otherwise we leave the race hole open and are just adding complexity.

1. The client will have to track a negative allowance (because it did not know how many requests it was allowed to send)

2. The server has to promise that an RST_STREAM due to max_concurrent_streams overflow has no side effects

o   The server should be verb agnostic (i.e. treat GET and POST alike) and just check some streamCount variable.

o   Otherwise, the client will have to hold all non-idempotent requests until it gets the SETTINGS frame from the server

3. The client will have to resubmit the request into its queue, to be sent when the allowance opens up

4. If the "blind" request(s) (i.e. those sent before the client received the SETTINGS frame) have an entity-body, then the client

o   Must wait for the server's SETTINGS frame before sending the entity-body, OR

o   Must be able to regenerate the entity-body when the "blind" request is RST_STREAMed

§  This means the layer on top of the client stack needs to be able to handle a "retry" error and resubmit the entity-body, OR

§  The client stack buffers the entire entity-body, as it converts it into DATA frames, until it knows that the request won't get RST_STREAM due to max_concurrent_streams

o   Or just blow up and complain to the user
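To make the cost concrete, the buffering/retry machinery in items 1-4 can be sketched roughly as follows. This is a toy model, not any real client stack; the class and method names (BlindRequestQueue, on_settings, etc.) are invented for illustration.

```python
from collections import deque

class BlindRequestQueue:
    """Toy model of a client that sends "blind" requests before the
    server's SETTINGS frame arrives, buffering entity bodies so they can
    be regenerated if the server refuses a stream with RST_STREAM."""

    def __init__(self, assumed_limit):
        self.limit = assumed_limit      # client's guess until SETTINGS arrives
        self.in_flight = {}             # stream_id -> buffered entity body
        self.pending = deque()          # bodies waiting for an open slot
        self.next_stream_id = 1

    def submit(self, body):
        """Send immediately if the allowance permits, else queue."""
        if len(self.in_flight) < self.limit:
            sid = self.next_stream_id
            self.next_stream_id += 2    # client-initiated streams are odd
            self.in_flight[sid] = body  # buffer body in case of RST_STREAM
            return sid
        self.pending.append(body)
        return None

    def on_settings(self, max_concurrent_streams):
        """The server's SETTINGS frame resolves the guess; note the limit
        may now be BELOW what is already in flight (negative allowance)."""
        self.limit = max_concurrent_streams
        while self.pending and len(self.in_flight) < self.limit:
            self.submit(self.pending.popleft())

    def on_rst_stream(self, stream_id):
        """Stream refused: the buffered body lets us re-queue it."""
        body = self.in_flight.pop(stream_id)
        self.pending.appendleft(body)

    def on_close(self, stream_id):
        """Stream finished normally; a slot opened up, so drain the queue."""
        self.in_flight.pop(stream_id)
        if self.pending and len(self.in_flight) < self.limit:
            self.submit(self.pending.popleft())
```

Even this stripped-down sketch has to buffer every entity body and carry three extra pieces of state just to survive the race, which is the complexity being objected to above.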

In general, I would prefer that we design HTTP/2.0 not to have such races in the first place, instead of piling on complexity to react to them.


From: willchan@google.com [mailto:willchan@google.com] On Behalf Of William Chan (陈智昌)
Sent: Friday, February 22, 2013 2:36 PM
To: Yoav Nir
Cc: Martin Thomson; Roberto Peon; Osama Mazahir; ietf-http-wg@w3.org Group
Subject: Re: #38 - HTTP2 min value for server supported max_concurrent_streams

We always have to examine what the choices end up being for which parties. If servers end up limiting parallelism, or requiring roundtrips to ramp up parallelism, then clients which want speed (browsers) will be incentivized to simply open up more connections to bypass the low parallelism limit or slow start.

Overall, I think it's better to tolerate the minor suboptimality of having servers RST_STREAM streams if they don't want so much parallelism, rather than incentivize browsers to open more connections.



On Fri, Feb 22, 2013 at 2:19 PM, Yoav Nir <ynir@checkpoint.com<mailto:ynir@checkpoint.com>> wrote:

On Feb 22, 2013, at 6:16 PM, Martin Thomson <martin.thomson@gmail.com<mailto:martin.thomson@gmail.com>> wrote:

> On 22 February 2013 05:18, Roberto Peon <grmocg@gmail.com<mailto:grmocg@gmail.com>> wrote:
>> Why 1?
>
> 1 seems a little restrictive, especially since 6 concurrent
> connections is the current expectation in many browsers.
Defaulting to 1 allows for a simple server that never has to handle multiple concurrent streams, one that can be implemented in far fewer lines of code but is still compliant. Great for serving software updates, large files, CRLs, etc. Not so great for web pages.

Other servers will quickly raise the limit via a SETTINGS frame.
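For what it's worth, raising the limit via SETTINGS is a tiny frame. A sketch of the encoding, using the byte layout that ultimately shipped in RFC 7540 (the draft under discussion at the time laid SETTINGS out differently, so treat this as illustrative only):

```python
import struct

SETTINGS_MAX_CONCURRENT_STREAMS = 0x3  # setting identifier (RFC 7540)
FRAME_TYPE_SETTINGS = 0x4

def settings_frame(max_concurrent_streams):
    """Build a SETTINGS frame raising the stream limit (RFC 7540 framing)."""
    # One setting entry: 16-bit identifier + 32-bit value, network byte order.
    payload = struct.pack(">HI", SETTINGS_MAX_CONCURRENT_STREAMS,
                          max_concurrent_streams)
    # Frame header: 24-bit length, 8-bit type, 8-bit flags, 32-bit stream id.
    # SETTINGS applies to the whole connection, hence stream id 0.
    length = len(payload)
    header = struct.pack(">BHBBI", length >> 16, length & 0xFFFF,
                         FRAME_TYPE_SETTINGS, 0x0, 0)
    return header + payload
```

A 15-byte frame, so a server that starts conservative pays very little on the wire to raise the limit; the cost is the round trip, which is exactly the race window discussed above.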

Yoav

Received on Friday, 22 February 2013 23:46:10 UTC