- From: 陈智昌 <willchan@chromium.org>
- Date: Fri, 22 Feb 2013 15:56:30 -0800
- To: Osama Mazahir <OSAMAM@microsoft.com>
- Cc: Yoav Nir <ynir@checkpoint.com>, Martin Thomson <martin.thomson@gmail.com>, Roberto Peon <grmocg@gmail.com>, "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
- Message-ID: <CAA4WUYhTqzxLgrzoVGsKxbU=XqPgD=GexX0Mnq3m8HiZJSKBcw@mail.gmail.com>
On Fri, Feb 22, 2013 at 3:45 PM, Osama Mazahir <OSAMAM@microsoft.com> wrote:

> As Martin said, 1 seems overly restrictive.
>
> My major concern is not the value of the number, but that we have a
> minimum value and the default be the same as the minimum. Otherwise, if we
> leave the race hole open then we are just increasing complexity.

Do you feel like the complexity is that bad? In my experience from
implementing SPDY, it is not.

> 1. Client will have to track negative allowance (because it did not know
> how many requests it was allowed to send)

Isn't this easy? The client always has to track how many outstanding
streams it has in order to respect the limit.

> 2. Server has to promise that RST_STREAM due to max_concurrent_stream
> overflow did not have any side effects
>
> o The server should be verb agnostic (i.e. GET vs POST) and just look at
> some streamCount variable.
>
> o Otherwise, client will have to pend all non-idempotent requests until
> it gets the SETTINGS frame from the server

Since RST_STREAM has an error code, this is easy to define.

> 3. Client will have to resubmit the request into its queue to be sent
> when the allowance opens up

Clients already have to know how to do this due to the GOAWAY race. They
also have to handle this in HTTP/1.X today. For example, if we get an
error when reusing a persistent HTTP connection (e.g. TCP RST), we will
resend the HTTP request over a new connection.

> 4. If the “blind” request(s) (i.e. sent before client received the
> SETTINGS frame) have entity-body then client
>
> o Must wait until the server’s SETTINGS frame before sending entity-body OR
>
> o Be able to regenerate the entity-body when the “blind” request is
> RST_STREAMed
>
> § This means the layer on top of the client stack needs to be able to
> handle a “retry” error and resubmit the entity-body OR
>
> § The client stack buffers all the entity-body, as it converts it into
> DATA frames, until it knows that the request won’t get RST_STREAM due to
> max_concurrent_stream
>
> o Or just blow up and complain to the user

Again, clients already have to handle this.

> In general, I would prefer if we made HTTP/2.0 to not have such races to
> begin with instead of piling on complexity to react to the races.

As someone with experience implementing a SPDY client, I do not believe
this is a big burden. If you believe it is, I would like to hear why.

> From: willchan@google.com [mailto:willchan@google.com] On Behalf Of
> William Chan (陈智昌)
> Sent: Friday, February 22, 2013 2:36 PM
> To: Yoav Nir
> Cc: Martin Thomson; Roberto Peon; Osama Mazahir; ietf-http-wg@w3.org Group
> Subject: Re: #38 - HTTP2 min value for server supported
> max_concurrent_streams
>
> We always have to examine what the choices end up being for which parties.
> If servers end up limiting parallelism, or requiring roundtrips to ramp up
> parallelism, then clients which want speed (browsers) will be incentivized
> to simply open up more connections to bypass the low parallelism limit or
> slow start.
>
> Overall, I think it's better to tolerate the minor suboptimality of having
> servers RST_STREAM streams if they don't want so much parallelism, rather
> than incentivize browsers to open more connections.
>
> On Fri, Feb 22, 2013 at 2:19 PM, Yoav Nir <ynir@checkpoint.com> wrote:
>
> On Feb 22, 2013, at 6:16 PM, Martin Thomson <martin.thomson@gmail.com>
> wrote:
>
> > On 22 February 2013 05:18, Roberto Peon <grmocg@gmail.com> wrote:
> >> Why 1?
> >
> > 1 seems a little restrictive, especially since 6 concurrent
> > connections is the current expectation in many browsers.
>
> Defaulting to 1 allows for a simple server that never has to handle
> multiple concurrent streams, one that can be implemented with much fewer
> lines of code, but is still compliant. Great for serving software updates,
> large files, CRLs, etc. Not so great for web pages.
>
> Other servers will quickly raise the limit via a SETTINGS frame.
>
> Yoav
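[Editor's illustration] For concreteness, here is a minimal sketch of the client-side bookkeeping William describes: count outstanding streams against the server's max-concurrent-streams setting, queue requests that exceed it, and requeue any stream the server refuses via RST_STREAM so it can be retried once the allowance opens up. This is written in Go with hypothetical names (streamScheduler, pendingRequest, etc.); it is not taken from any real SPDY or HTTP/2 implementation, and it assumes a RST_STREAM error code meaning "refused without side effects" as discussed in point 2 above.

package h2sched

import "sync"

// errorCode mirrors the RST_STREAM error codes under discussion; a
// "refused" code means the server did not process the stream, so the
// request is safe to resubmit. (HTTP/2 later standardized REFUSED_STREAM.)
type errorCode uint32

const refusedStream errorCode = 7 // assumed value for this sketch

// pendingRequest is a request the caller can (re)issue on a given stream ID.
type pendingRequest struct {
	send func(streamID uint32) // writes the request frames for this stream
}

type streamScheduler struct {
	mu           sync.Mutex
	maxStreams   uint32 // provisional until the server's SETTINGS frame arrives
	inFlight     map[uint32]*pendingRequest
	queue        []*pendingRequest
	nextStreamID uint32
}

func newStreamScheduler(provisionalLimit uint32) *streamScheduler {
	return &streamScheduler{
		maxStreams:   provisionalLimit,
		inFlight:     make(map[uint32]*pendingRequest),
		nextStreamID: 1, // client-initiated streams are odd
	}
}

// submit either starts the request immediately or parks it until a slot opens.
func (s *streamScheduler) submit(r *pendingRequest) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if uint32(len(s.inFlight)) < s.maxStreams {
		s.startLocked(r)
		return
	}
	s.queue = append(s.queue, r)
}

func (s *streamScheduler) startLocked(r *pendingRequest) {
	id := s.nextStreamID
	s.nextStreamID += 2
	s.inFlight[id] = r
	r.send(id) // sketch only: a real client would not hold the lock here
}

// onSettings applies the server's max-concurrent-streams value. If the client
// guessed too high before the frame arrived, it just stops opening new streams
// until the in-flight count drains below the new limit.
func (s *streamScheduler) onSettings(maxConcurrentStreams uint32) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.maxStreams = maxConcurrentStreams
	s.drainLocked()
}

// onStreamDone handles both normal completion and RST_STREAM. A stream refused
// for exceeding the limit goes back to the front of the queue, which is the
// same retry behavior clients already need for GOAWAY and HTTP/1.x TCP RST.
func (s *streamScheduler) onStreamDone(streamID uint32, reset bool, code errorCode) {
	s.mu.Lock()
	defer s.mu.Unlock()
	r := s.inFlight[streamID]
	delete(s.inFlight, streamID)
	if reset && code == refusedStream && r != nil {
		s.queue = append([]*pendingRequest{r}, s.queue...)
	}
	s.drainLocked()
}

func (s *streamScheduler) drainLocked() {
	for len(s.queue) > 0 && uint32(len(s.inFlight)) < s.maxStreams {
		r := s.queue[0]
		s.queue = s.queue[1:]
		s.startLocked(r)
	}
}

Note that requeueing only works for requests whose entity-body can be replayed, which is exactly the buffering-or-regeneration requirement raised in point 4 of the quoted message.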
Received on Friday, 22 February 2013 23:57:01 UTC