- From: 陈智昌 <willchan@chromium.org>
- Date: Tue, 26 Feb 2013 08:29:16 -0800
- To: Osama Mazahir <OSAMAM@microsoft.com>
- Cc: Yoav Nir <ynir@checkpoint.com>, Martin Thomson <martin.thomson@gmail.com>, Roberto Peon <grmocg@gmail.com>, "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
- Message-ID: <CAA4WUYhmNjiU9Y0iw-hDrRg5DzT+a5AobS1XU4htR60OMJtTwg@mail.gmail.com>
On Tue, Feb 26, 2013 at 8:26 AM, William Chan (陈智昌) <willchan@chromium.org> wrote:

> Thank you for continuing to raise this issue. I definitely think this is
> worth discussing. I've reflected a bit on what you and others have said.
> If I understand you correctly, you are primarily concerned with the races
> before limits can be negotiated, and would like to see them fixed. I've
> pointed out that the races existed in HTTP/1.X and still exist with things
> like GOAWAY. It sounds like you'd like to fix them. I'm OK with fixing
> them as long as they do not impose performance costs due to extra
> roundtrips to reach appropriate parallelism. I think we disagree on
> acceptable code complexity. We've already implemented this logic in
> Chromium and believe it not to be burdensome.
>
> So, on that point, I think we may agree to disagree and see how the rest
> of the working group feels.
>
> But, can we fix the race without imposing performance costs? Let's
> examine the cases:
>
> (1) Upgrade, where, assuming successful negotiation, the server begins
> HTTP/2 in response to the HTTP/1.1 request with the Upgrade header.
> (2) HTTPS negotiation via a TLS-NPN-style mechanism, where the client
> speaks HTTP/2 first.
> (3) Out-of-band discovery, like DNS, where the client starts speaking
> HTTP/2.
>
> In (1), the server speaks first and can send SETTINGS immediately, so
> respecting server limits is not a concern. Respecting client limits is a
> concern. I think this falls into your "handshake advertise" scenario. We
> could add an HTTP header for the relevant SETTINGS during the Upgrade.
> This way, the server can respect the client limits.
>
> In (2), the client speaks first, and will only respect server default
> limits, not server-specified limits, since the server hasn't had a chance
> to send them yet. It's conceivable we could add settings into the NPN
> handshake. I'm a bit concerned about stashing so much into that handshake,
> since we've also previously discussed expressing capabilities in the
> handshake (e.g. supporting WebSockets over HTTP/2). If we wanted to do
> something like this, we would probably need to convey such a requirement
> to the TLS WG. I'm hesitant.
>
> In (3), since there is no negotiation, only discovery via DNS mechanisms,
> we'd have to stash the settings in DNS as discussed previously, and
> probably sign them too.
>
> I'm open to discussing conveying SETTINGS via the negotiation/discovery
> mechanisms we have available, in order

On re-reading this, "open to discussing" sounds rather presumptuous. Just wanted to follow up to say it's not my intent to decide what can and cannot be discussed :P I blame the early morning.

> to attempt to reduce complexity. If we can reasonably prevent races by
> conveying SETTINGS sooner, then great. But I still believe the defaults
> should be chosen so that they do not impose performance costs due to
> roundtrips to raise the limits from a low default. As Patrick says, 8 is
> far too small. The default should be on the order of 100. It's very common
> to do large domain sharding to a CDN, and we should make sure we can
> handle that case with a single connection, rather than incentivizing web
> devs to continue to do domain sharding to achieve the desired parallelism.
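As a rough sketch of case (1): the client's SETTINGS could ride along on the Upgrade request itself, so the server knows the client's limits before sending its first HTTP/2 frame. Note the `HTTP2-Settings` header name, the `h2c` token, the unpadded base64url encoding, and the 16-bit-identifier/32-bit-value entry layout below all follow what RFC 7540 (section 3.2.1) eventually standardized, not any draft current in this thread, so treat this purely as an illustration:

```python
import base64
import struct

# Illustrative sketch of case (1): carry the client's SETTINGS on the
# HTTP/1.1 Upgrade request so the server can respect client limits from
# its very first HTTP/2 frame. Header name, token, encoding, and entry
# layout follow RFC 7540 section 3.2.1, which postdates this thread.

SETTINGS_MAX_CONCURRENT_STREAMS = 0x3  # identifier value per RFC 7540

def encode_settings(settings):
    """Pack {identifier: value} pairs as 6-octet SETTINGS entries,
    then base64url-encode without padding."""
    payload = b"".join(struct.pack("!HI", ident, value)
                       for ident, value in settings.items())
    return base64.urlsafe_b64encode(payload).rstrip(b"=").decode("ascii")

def upgrade_request(host, settings):
    """Compose the cleartext Upgrade request that opens the connection."""
    return (f"GET / HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: Upgrade, HTTP2-Settings\r\n"
            f"Upgrade: h2c\r\n"
            f"HTTP2-Settings: {encode_settings(settings)}\r\n"
            f"\r\n")

print(upgrade_request("example.com", {SETTINGS_MAX_CONCURRENT_STREAMS: 100}))
```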
> On Tue, Feb 26, 2013 at 12:34 AM, Osama Mazahir <OSAMAM@microsoft.com> wrote:
>
>> Internet Explorer has similar gymnastics. However, I don’t think that is
>> just cause to reinvent the same problems again.
>>
>> In general, the problem we have is that one side initiates operations
>> without knowing the peer’s limits. MaxConcurrentStreams is one example,
>> and negative flow control bytecounts is another (i.e. where the receiver
>> is trying to advertise that it has small buffers but we shove data down
>> its throat and dictate that it “MUST be prepared to receive the entire
>> amount” [1]).
>>
>> Possible solutions include:
>>
>> 1. Handshake Advertise: Advertise limits as part of
>> handshake/negotiation. That way, upon session start each side knows the
>> other’s limit and can guarantee that it won’t violate it, and we can
>> simplify all parts of the protocol that deal with limit-exceed cases.
>>
>> 2. Defaults and minimums: In the spec we pick some defaults and minimums
>> so that each endpoint starts at a known initial state, and each endpoint
>> can thus guarantee that it won’t violate the peer’s limits. The initial
>> SETTINGS frame can grow those limits. Thus, we can simplify or delete
>> all the limit-exceed handling.
>>
>> 3. Don’t fix: Not really a “solution”. We write pages of protocol text
>> describing the races and how to work around limit-exceed cases, code and
>> test it, and put that burden on all future implementers.
>>
>> In short, I ask the WG not to summarily dismiss this issue. We should
>> devote some energy to ensuring that the protocol is robust by design.
>>
>> [1] http://http2.github.com/http2-spec/#rfc.section.3.7.9.3
>>
>> *From:* willchan@google.com [mailto:willchan@google.com] *On Behalf Of*
>> William Chan (陈智昌)
>> *Sent:* Friday, February 22, 2013 3:57 PM
>> *To:* Osama Mazahir
>> *Cc:* Yoav Nir; Martin Thomson; Roberto Peon; ietf-http-wg@w3.org Group
>> *Subject:* Re: #38 - HTTP2 min value for server supported
>> max_concurrent_streams
>>
>> On Fri, Feb 22, 2013 at 3:45 PM, Osama Mazahir <OSAMAM@microsoft.com>
>> wrote:
>>
>> As Martin said, 1 seems overly restrictive.
>>
>> My major concern is not the value of the number, but that we have a
>> minimum value and that the default be the same as the minimum.
>> Otherwise, we leave the race hole open and are just increasing
>> complexity.
>>
>> Do you feel like the complexity is that bad? In my experience, from
>> implementing SPDY, it is not.
>>
>> 1. Client will have to track negative allowance (because it did not
>> know how many requests it was allowed to send)
>>
>> Isn't this easy? The client always has to track how many outstanding
>> streams it has in order to respect the limit.
>>
>> 2. Server has to promise that RST_STREAM due to max_concurrent_stream
>> overflow did not have any side effects
>>
>> - The server should be verb agnostic (i.e. GET vs POST) and just look
>>   at some streamCount variable.
>> - Otherwise, the client will have to pend all non-idempotent requests
>>   until it gets the SETTINGS frame from the server.
>>
>> Since RST_STREAM has an error code, this is easy to define.
>>
>> 3. Client will have to resubmit the request into its queue to be sent
>> when the allowance opens up
>>
>> Clients already have to know how to do this due to the GOAWAY race.
>> They also have to handle this in HTTP/1.X today. For example, if we get
>> an error when reusing a persistent HTTP connection (e.g. TCP RST), we
>> will resend the HTTP request over a new connection.
>>
>> 4. If the “blind” request(s) (i.e. sent before the client received the
>> SETTINGS frame) have an entity-body, then the client:
>>
>> - Must wait until the server’s SETTINGS frame before sending the
>>   entity-body, OR
>> - Be able to regenerate the entity-body when the “blind” request is
>>   RST_STREAMed
>>   - This means the layer on top of the client stack needs to be able
>>     to handle a “retry” error and resubmit the entity-body, OR
>>   - The client stack buffers all the entity-body, as it converts it
>>     into DATA frames, until it knows that the request won’t get
>>     RST_STREAM due to max_concurrent_stream, OR
>> - Or just blow up and complain to the user
>>
>> Again, clients already have to handle this.
>>
>> In general, I would prefer if we made HTTP/2.0 not have such races to
>> begin with, instead of piling on complexity to react to the races.
>>
>> As someone with experience implementing a SPDY client, I do not believe
>> this is a big burden. If you believe it is, I would like to hear why.
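For what it's worth, the bookkeeping that points 1–4 above describe is compact. Below is a minimal, illustrative sketch (the class, method names, and `send_fn` callback are invented for the example): the client dispatches against an assumed default limit, absorbs a "negative allowance" if the server's first SETTINGS lowers the limit mid-flight, and re-queues streams the server refuses. The REFUSED_STREAM code and its no-side-effects guarantee match what RFC 7540 later pinned down, not the drafts being debated here.

```python
from collections import deque

class StreamLimiter:
    """Illustrative client-side MAX_CONCURRENT_STREAMS bookkeeping."""

    REFUSED_STREAM = 0x7  # "stream was not processed" (RFC 7540 value)

    def __init__(self, send_fn, assumed_limit=100):
        self.send_fn = send_fn      # callback: request -> new stream id
        self.limit = assumed_limit  # a guess until the server's SETTINGS arrives
        self.active = {}            # stream id -> in-flight request
        self.pending = deque()      # requests waiting for a free slot

    def submit(self, request):
        if len(self.active) < self.limit:
            self.active[self.send_fn(request)] = request
        else:
            self.pending.append(request)

    def on_settings(self, new_limit):
        # len(self.active) may now exceed new_limit: the "negative
        # allowance". Nothing is cancelled; dispatch simply stalls until
        # completed streams drain the excess.
        self.limit = new_limit
        self._drain()

    def on_rst_stream(self, stream_id, error_code):
        request = self.active.pop(stream_id)
        if error_code == self.REFUSED_STREAM:
            # No side effects are promised, so even a POST is safe to
            # resubmit -- the same obligation an HTTP/1.1 client meets
            # when a reused connection dies with a TCP RST. The retry
            # waits for a slot rather than racing the server again.
            self.pending.appendleft(request)

    def on_stream_closed(self, stream_id):
        self.active.pop(stream_id, None)
        self._drain()

    def _drain(self):
        while self.pending and len(self.active) < self.limit:
            request = self.pending.popleft()
            self.active[self.send_fn(request)] = request
```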
For example, if we get an >> error when reusing a persistent HTTP connection (e.g. TCP RST), we will >> resend the HTTP request over a new connection.**** >> >> 4. If the “blind” request(s) (i.e. sent before client received >> the SETTINGS frame) have entity-body then client**** >> >> o Must wait until the server’s SETTINGS frame before sending >> entity-body OR**** >> >> o Be able to regenerate the entity-body when the “blind” request is >> RST_STREAMed**** >> >> § This means the layer on top of client stack needs to be able to >> handle a “retry” error and resubmit the entity-body OR**** >> >> § The client stack buffers all the entity-body, as it converts it into >> DATA frames, until it knows that the request won’t get RST_STREAM due to >> max_concurrent_stream**** >> >> o Or just blow up and complain to the user**** >> >> ** ** >> >> Again, clients already have to handle this.**** >> >> **** >> >> **** >> >> In general, I would prefer if we made HTTP/2.0 to not have such races to >> begin with instead of piling on complexity to react to the races.**** >> >> ** ** >> >> As someone with experience implementing a SPDY client, I do not believe >> this is a big burden. If you believe it is, I would like to hear why.**** >> >> **** >> >> **** >> >> **** >> >> *From:* willchan@google.com [mailto:willchan@google.com] *On Behalf Of *William >> Chan (???) >> *Sent:* Friday, February 22, 2013 2:36 PM >> *To:* Yoav Nir >> *Cc:* Martin Thomson; Roberto Peon; Osama Mazahir; ietf-http-wg@w3.orgGroup >> *Subject:* Re: #38 - HTTP2 min value for server supported >> max_concurrent_streams**** >> >> **** >> >> We always have to examine what the choices end up being for which >> parties. If servers end up limiting parallelism, or requiring roundtrips to >> ramp up parallelism, then clients which want speed (browsers) will be >> incentivized to simply open up more connections to bypass the low >> parallelism limit or slow start.**** >> >> **** >> >> Overall, I think it's better to tolerate the minor suboptimality of >> having servers RST_STREAM streams if they don't want so much parallelism, >> rather than incentivize browsers to open more connections.**** >> >> **** >> >> **** >> >> **** >> >> On Fri, Feb 22, 2013 at 2:19 PM, Yoav Nir <ynir@checkpoint.com> wrote:*** >> * >> >> >> On Feb 22, 2013, at 6:16 PM, Martin Thomson <martin.thomson@gmail.com> >> wrote: >> >> > On 22 February 2013 05:18, Roberto Peon <grmocg@gmail.com> wrote: >> >> Why 1? >> > >> > 1 seems a little restrictive, especially since 6 concurrent >> > connections is the current expectation in many browsers.**** >> >> Defaulting to 1 allows for a simple server that never has to handle >> multiple concurrent streams, one that can be implemented with much fewer >> lines of code, but is still compliant. Great for serving software updates, >> large files, CRLs, etc. Not so great for web pages. >> >> Other servers will quickly raise the limit via a SETTINGS frame. >> >> Yoav**** >> >> **** >> >> ** ** >> > >
Received on Tuesday, 26 February 2013 16:29:44 UTC