- From: Mike Bishop <Michael.Bishop@microsoft.com>
- Date: Mon, 7 Oct 2013 15:31:42 +0000
- To: Amos Jeffries <squid3@treenet.co.nz>, HTTP Working Group <ietf-http-wg@w3.org>
- Message-ID: <06d4019afe3b4bd4932e528135f56ba8@BY2PR03MB025.namprd03.prod.outlook.com>
You might also choose to send only the bare minimum in the HTTP2-Settings header (since this will be padded onto every 1.1 request you might want to upgrade), then send a more complete SETTINGS frame once you know the server speaks 2.0. Regardless, I don't believe it's an omission -- it's a decision to keep the code paths as similar as possible by always having the client send SETTINGS as part of the 2.0 connection setup, even if (in this case) it might be redundant.

However, Amos, your example appears to assume that the client can change its own max-streams. Actually, each side specifies what it will allow the other peer to do. The RFC default is 100. The client can send a SETTINGS frame allowing the server up to 1000 if it wants, but it still only gets 100 until it sees the server's SETTINGS frame. If the server ups it to 1000, the client can then send another 900 requests after getting that SETTINGS frame.

You're hitting the same issue that Gabriel has been trying to find a way around -- this works fine so long as you only want to increase the value. As soon as you want to decrease it, the server either has to tolerate misbehavior due to the race or start resetting streams. If the server resets streams, the client has to maintain enough state to queue those requests for resubmission if they're reset. Gabriel & Roberto's draft offers a way for the server to start with a lower default than 100 so it doesn't have to decrease it immediately; we don't have a general-purpose solution for a peer that wants to decrease the limit later, other than allowing the errors and recovering from them.

Sent from Windows Mail

From: Amos Jeffries <mailto:squid3@treenet.co.nz>
Sent: Saturday, October 5, 2013 8:16 PM
To: HTTP Working Group <mailto:ietf-http-wg@w3.org>

On 6/10/2013 9:14 a.m., William Chan (陈智昌) wrote:
> On Sat, Oct 5, 2013 at 5:06 AM, Salvatore Loreto
> <salvatore.loreto@ericsson.com <mailto:salvatore.loreto@ericsson.com>> wrote:
> > while implementing the 06 draft we have discovered that
> > when starting HTTP/2.0 for http with the Upgrade mechanism
> > the client sends its SETTINGS twice (according to Section 3.2):
> >
> > the first time in the HTTP2-Settings header field,
> >
> >     GET /default.htm HTTP/1.1
> >     Host: server.example.com <http://server.example.com>
> >     Connection: Upgrade, HTTP2-Settings
> >     Upgrade: HTTP/2.0
> >     HTTP2-Settings: <base64url encoding of HTTP/2.0 SETTINGS payload>
> >
> > and the second time after the reception of the 101 response:
> >
> >     Upon receiving the 101 response, the client sends a
> >     connection header (Section 3.5), which includes a SETTINGS frame.
>
> Good catch. Looks like we introduced this when adding the
> HTTP2-Settings. I don't have a strong opinion here on how to fix (if
> we want to fix at all) so I'll save my paint for other bikesheds.

I think it is a good idea to leave it in. At best the values are identical, so there is no effect on the behaviour. At worst the client may actually want to change something once it receives the server SETTINGS.

A server receiving both should take the first one as the client's initial SETTINGS and the second one as an update. The client is allowed to send SETTINGS at any time, even as the first frame after a previous SETTINGS.
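As a rough illustration of that last point (a sketch only -- the names and the draft-06 wire-format assumptions here are mine, not something from the thread or a spec), a server might fold the two together by seeding the client's state from the HTTP2-Settings header and applying any later SETTINGS frame as an update:

    import base64
    import struct

    def decode_settings(payload):
        # Assumes the draft-06 layout: each entry is 8 octets -- a reserved
        # octet plus a 24-bit setting identifier, then a 32-bit value.
        settings = {}
        for off in range(0, len(payload), 8):
            ident, value = struct.unpack_from(">II", payload, off)
            settings[ident & 0x00FFFFFF] = value
        return settings

    def settings_from_upgrade_header(header_value):
        # HTTP2-Settings is base64url with trailing '=' padding omitted,
        # so restore the padding before decoding.
        padded = header_value + "=" * (-len(header_value) % 4)
        return decode_settings(base64.urlsafe_b64decode(padded))

    class ClientSettings:
        def __init__(self, upgrade_header_value):
            # The first SETTINGS: carried on the HTTP/1.1 Upgrade request.
            self.values = settings_from_upgrade_header(upgrade_header_value)

        def on_settings_frame(self, frame_payload):
            # The second (and any later) SETTINGS frame is just an update.
            self.values.update(decode_settings(frame_payload))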
> > Another thing that we think should be clarified during starting
> > HTTP/2.0 (common to both the http and http2 scenarios) is related
> > to the second-to-last paragraph of Section 3.5:
> >
> >     To avoid unnecessary latency, clients are permitted to send
> >     additional frames to the server immediately after sending the
> >     client connection header, without waiting to receive the server
> >     connection header. It is important to note, however, that the
> >     server connection header SETTINGS frame might include parameters
> >     that necessarily alter how a client is expected to communicate
> >     with the server. Upon receiving the SETTINGS frame, the client
> >     is expected to honor any parameters established.
> >
> > In the paragraph it is not clear what happens (what the default
> > behaviour should be) if the server alters how a client is expected
> > to communicate with the server. This ambiguity could lead to
> > different server-side implementations and then to unexpected
> > behaviours. It would be better to specify, to avoid vulnerabilities
> > in server implementations, something like the following:
> >
> >     "if the client exceeds any limitations of the server before the
> >     SETTINGS is understood by the client, the server SHOULD/MUST
> >     send GOAWAY."
> >
> > An alternative to this proposal is that the server SHOULD enforce
> > the local SETTINGS.
>
> A peer is always going to enforce its known settings. If one side
> chooses to update SETTINGS in the middle of the connection, the peer
> will always be slightly out of sync. We should not treat SETTINGS
> violations as connection errors. AFAICT, all defined settings have
> reasonable fallback scenarios (which look to be unspecced; we should
> maybe fix that). That's acceptable, although it's probably desirable
> to avoid it when possible. That's why we've discussed stuff like
> HTTP2-Settings and settings in ALPN, in order to eliminate those
> races when possible. In the HTTP/2 Upgrade scenario, it seems to me
> that it'd be desirable for the client to wait for the server's HTTP/2
> connection header (not just the HTTP/1.X Upgrade response) before
> starting to send any of its own frames, in order to prevent the race.
> This should not incur any HTTP/2-level roundtrips, and hopefully not
> any TCP/lower-level roundtrips from CWND and whatnot.

This is starting to get into over-engineering. In the common case the server will be delivering the 101 followed immediately by SETTINGS, so the client receives both together. In the edge cases for working HTTP/2 there may be a delay, but that will likely be caused by the client holding off its own SETTINGS or by network issues. In both those cases waiting for the server SETTINGS is mandatory anyway, to avoid making the possible network problems even worse.

There are flow-control issues surrounding the case when each peer is trying to alter an existing connection's capacity limits (for max-streams or anything else). There is a MUST in there somewhere about each client only sending up to the *lowest* of the two mutually agreed limits.
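As a side note, here is one possible reading of that lowest-limit rule from the client side, sketched in Python (the class and its names are mine; the default value of 100 is the RFC default Mike mentions above):

    RFC_DEFAULT_MAX_STREAMS = 100  # assumed until the server's SETTINGS arrives

    class StreamGate:
        def __init__(self):
            self.peer_max = RFC_DEFAULT_MAX_STREAMS  # server's limit on us
            self.open_streams = 0

        def on_server_settings(self, max_concurrent_streams):
            # A raise takes effect immediately. A lowering also applies
            # immediately to new streams, while streams already open above
            # the new limit are exposed to the race discussed above.
            self.peer_max = max_concurrent_streams

        def try_open_stream(self):
            # Send only up to the lowest currently-known limit; requests
            # beyond it stay queued locally rather than going on the wire.
            if self.open_streams >= self.peer_max:
                return False  # caller should queue the request
            self.open_streams += 1
            return True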
Each end should, whenever possible, also be attempting to handle the maximum of the limit proposed by the other end (whether that handling means leaving frames in TCP buffers or sending a bunch of RST_STREAM -- either will work best under different situations).

To use max-streams as the example, we have flows like so:

Raising the limit:
 * RFC default: 100
 * client sends SETTINGS 1000
   - client starts sending up to 1000 requests
 * server receives client SETTINGS 1000
   - sends SETTINGS 2000
   - starts sending up to 1000 responses
 * client receives server SETTINGS 2000
   - client MAY ignore it and continue to send only 1000
   - OR client may choose to increase itself and send another SETTINGS

=> no problems there. Either end may halt the increase whenever it wants.

Lowering the limit:
 * RFC default: 100
 * client sends SETTINGS 100
   - client starts sending up to 100 requests; say it sends 50
 * server sends SETTINGS 20
   - server starts sending up to 20 responses
   - server may RST_STREAM the streams above the new limit of 20; in some
     circumstances it can simply leave frames in the TCP buffers and wait
     for responses to free up resources
 * client has sent 50 requests, receives server SETTINGS streams:20
   - client stops sending new requests until <20 streams are in use

=> the server faces a problem of queued requests for a brief period, but only in the single case where one party is attempting to lower from the default value. We can specify that all implementations MUST be able to support that default or any higher limit that they offer.

Lowering the limit can work in the same way in reverse for PUSH stream limits; the server-side trade-off is sketched below.
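As a rough illustration of that server-side choice when lowering the limit (a sketch only -- the function, its parameters, and the use of REFUSED_STREAM here are my own framing, not anything a draft mandates):

    # Decide what to do with streams the client had already opened above a
    # freshly lowered max-streams limit.
    def handle_excess_streams(excess_streams, resources_critical,
                              reset_stream, pending):
        for stream in excess_streams:
            if resources_critical:
                # Costs the client: it must keep enough state to resubmit
                # the request if the stream is reset.
                reset_stream(stream, "REFUSED_STREAM")
            else:
                # Cheaper when resources allow: leave the request queued (or
                # sitting in the TCP buffers) and answer once a slot frees up.
                pending.append(stream)

Amos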
Received on Monday, 7 October 2013 15:32:13 UTC