Re: Starting HTTP/2.0 for HTTP - Upgrade

On 6/10/2013 9:14 a.m., William Chan (ι™ˆζ™Ίζ˜Œ) wrote:
> On Sat, Oct 5, 2013 at 5:06 AM, Salvatore Loreto 
> <salvatore.loreto@ericsson.com>
> wrote:
>
>
>     while implementing the 06 draft we have discovered that
>     when starting HTTP/2.0 for http with the Upgrade mechanism
>     the client sends its SETTINGS twice (according to Section 3.2).
>
>     The first time is in the HTTP2-Settings header field:
>
>           GET /default.htm HTTP/1.1
>           Host: server.example.com
>           Connection: Upgrade, HTTP2-Settings
>           Upgrade: HTTP/2.0
>           HTTP2-Settings: <base64url encoding of HTTP/2.0 SETTINGS payload>
>
>
>     The second is after the reception of the 101 response:
>
>          Upon receiving the 101 response, the client sends a
>         connection header (Section 3.5), which includes a SETTINGS frame.
>
>
> Good catch. Looks like we introduced this when adding the 
> HTTP2-Settings. I don't have a strong opinion here on how to fix (if 
> we want to fix at all) so I'll save my paint for other bikesheds.

I think it is a good idea to leave it in. At best the values are 
identical so there is no effect on the behaviour. At worst the client 
may actually want to change something once it receives the server SETTINGS.

A server receiving both should take the first one as the client's
initial SETTINGS, and the second one as an update. The client is
allowed to send SETTINGS at any time, even as the first frame after a
previous SETTINGS.
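
As a rough sketch of that (Python, hypothetical names, assuming the
draft-06 8-octet setting entries): the values recovered from
HTTP2-Settings seed the state, and the copy in the connection header is
just another update.

    import base64
    import struct

    def decode_http2_settings_header(value):
        # HTTP2-Settings carries the base64url-encoded SETTINGS payload
        # with the padding omitted; a draft-06 entry is 8 octets:
        # 8 reserved bits, a 24-bit identifier and a 32-bit value.
        payload = base64.urlsafe_b64decode(value + "=" * (-len(value) % 4))
        settings = {}
        for offset in range(0, len(payload), 8):
            ident, val = struct.unpack_from("!II", payload, offset)
            settings[ident & 0x00FFFFFF] = val
        return settings

    class ClientSettings:
        def __init__(self):
            self.values = {}

        def apply(self, new_values):
            # First call: the initial SETTINGS recovered from HTTP2-Settings.
            # Later calls (including the copy in the connection header) are
            # ordinary updates; identical values are simply a no-op.
            self.values.update(new_values)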

>
>     Another thing that we think should be clarified when starting
>     HTTP/2.0 (common to both the http and http2 scenarios)
>     is related to the second-to-last paragraph of Section 3.5:
>
>         To avoid unnecessary latency, clients are permitted to send
>         additional frames to the server immediately after sending the client
>         connection header, without waiting to receive the server connection
>         header.  It is important to note, however, that the server connection
>         header SETTINGS frame might include parameters that necessarily alter
>         how a client is expected to communicate with the server.  Upon
>         receiving the SETTINGS frame, the client is expected to honor any
>         parameters established.
>
>
>     In the paragraph it is not clear what happens (what should be the
>     default behavior) if the server alters how a client is expected to
>     communicate with the server.
>     This ambiguity could lead to different server-side implementations
>     and then to unexpected behaviors.
>     To avoid vulnerabilities in server implementations, it would be
>     better to specify something like the following:
>     "if the client exceeds any limitations of the server before the
>     SETTINGS is understood by the client,
>     the server SHOULD/MUST send GOAWAY."
>     An alternative to this proposal is that the server SHOULD enforce
>     its local SETTINGS.
>
>
> A peer is always going to enforce its known settings. If one side 
> chooses to update SETTINGS in the middle of the connection, the peer 
> will always be slightly out of sync. We should not treat SETTINGS 
> violations as connection errors. AFAICT, all defined settings have 
> reasonable fallback scenarios (that look to be unspecced, which we 
> maybe should fix). That's acceptable, although it's probably desirable 
> to avoid it when possible. That's why we've discussed stuff like 
> HTTP2-Settings and settings in ALPN, in order to eliminate those races 
> when possible. In the HTTP/2 Upgrade scenario, it seems to me that
> it'd be desirable for the client to wait for the server's HTTP/2 
> connection header (not just the HTTP/1.X Upgrade response) before 
> starting to send any of its own frames, in order to prevent the race. 
> This should not incur any HTTP/2 level roundtrips, and hopefully not 
> any TCP/lower level roundtrips from CWND & what not.

This is starting to get into over-engineering.

In the common case the server will be delivering the 101 followed
immediately by its SETTINGS, so the client receives both together anyway.

In the edge cases for working HTTP/2 there may be a delay, but that
will likely be caused by the client holding off its own SETTINGS or by
network issues. In both of those cases waiting for the server SETTINGS
is necessary anyway, to avoid making any network problems even worse.
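
Put as code, the client side of that is nothing exotic: after the 101
it sends its own connection header and then reads frames until the
server's SETTINGS arrives before doing anything else. A rough Python
sketch (assuming the draft-06 8-octet frame header; sock is a blocking
socket and all names are hypothetical):

    import struct

    CONNECTION_MAGIC = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"
    SETTINGS_TYPE = 0x4  # SETTINGS frame type code in draft-06

    def recv_exact(sock, n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed during connection header")
            buf += chunk
        return buf

    def read_frame(sock):
        # draft-06 frame header: 16-bit length, 8-bit type, 8-bit flags,
        # then a reserved bit and a 31-bit stream identifier.
        length, ftype, flags, stream = struct.unpack("!HBBI", recv_exact(sock, 8))
        return ftype, flags, stream & 0x7FFFFFFF, recv_exact(sock, length)

    def after_101(sock, client_settings_frame):
        # Send our own connection header straight away...
        sock.sendall(CONNECTION_MAGIC + client_settings_frame)
        # ...then hold any further frames until the server connection
        # header (its SETTINGS frame) has arrived, so we never race past
        # limits it is about to tell us about.
        while True:
            ftype, flags, stream, payload = read_frame(sock)
            if ftype == SETTINGS_TYPE:
                return payload  # apply these before sending any requests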


There are flow control issues surrounding the case where each peer is
trying to alter an existing connection's capacity limits (for
max-streams or anything else). There is a MUST in there somewhere about
each client only sending up to the *lowest* of the two mutually agreed
limits. Each end should, whenever possible, also be attempting to
handle the maximum of the limits proposed by the other end (whether
that handling means leaving frames in the TCP buffers or sending a
bunch of RST_STREAM - either will work best under different situations).
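
In code terms that rule is just (a sketch, hypothetical names):

    def outbound_stream_limit(peer_advertised_limits):
        # Sending side: only ever use the LOWEST of the limits the peer
        # has advertised that could still be in force (the old value and
        # the new one while a change is in flight).
        return min(peer_advertised_limits)

    def inbound_stream_tolerance(locally_advertised_limits):
        # Receiving side: be prepared for the HIGHEST limit we have
        # advertised that the peer might still be acting on, whether that
        # means leaving frames in the TCP buffers or answering the excess
        # with RST_STREAM.
        return max(locally_advertised_limits)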

Using max-streams as the example, the flows look like this:

Raising the limit:

* RFC default: 100

* client sends SETTINGS 1000
- client starts sending up to 1000 requests

* server receives client SETTINGS 1000
- sends SETTINGS 2000
- starts sending up to 1000 responses

* client receives server SETTINGS 2000
- client MAY ignore it and continue to send only 1000.
- OR the client may choose to raise its own limit and send another SETTINGS

=> no problems there. Either end may halt the increase whenever it wants.


Lowering the limit:

* RFC default: 100

* client sends SETTINGS 100
- client starts sending up to 100 requests; say it sends 50.

* server sends SETTINGS 20
- server starts sending up to 20 responses
- server may RST_STREAM the streams above its limit of 20; in some
circumstances it can simply leave the frames in the TCP buffers and
wait for responses to free up resources.

* client sent 50 requests, receives server SETTINGS streams:20
- client stops sending new requests until <20 streams are in use

=> the server faces a problem of queued requests for a brief period,
but only in the single case where one party is attempting to lower the
limit from the default value. We can specify that all implementations
MUST be able to support that default, or any higher limit that they
themselves offer.
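
To make the client side of the lowering case concrete, a rough sketch
(Python, hypothetical names):

    class StreamLimiter:
        # Client-side tracking of how many requests may be in flight,
        # against the peer's advertised max-streams limit.

        def __init__(self, peer_max_streams=100):  # default used in the flows above
            self.peer_max_streams = peer_max_streams
            self.open_streams = 0

        def can_open_stream(self):
            return self.open_streams < self.peer_max_streams

        def on_stream_opened(self):
            self.open_streams += 1

        def on_stream_closed(self):
            self.open_streams -= 1

        def on_peer_settings(self, new_max_streams):
            # Lowering (say 100 -> 20 with 50 already open) does not kill
            # the streams already open; it only stops NEW requests until
            # usage drops below the new limit. Raising just means
            # can_open_stream() becomes true again sooner.
            self.peer_max_streams = new_max_streams

The server end is where the brief queue shows up: it either parks the
excess requests or answers them with RST_STREAM, as above.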

Lowering the limit can work in the same way in reverse for PUSH stream 
limits.


Amos

Received on Sunday, 6 October 2013 03:13:23 UTC