- From: Osama Mazahir <OSAMAM@microsoft.com>
- Date: Fri, 26 Sep 2014 23:16:02 +0000
- To: Jeff Pinner <jpinner@twitter.com>, Matthew Cox <macox@microsoft.com>
- CC: Martin Thomson <martin.thomson@gmail.com>, Greg Wilkins <gregw@intalio.com>, Michaela LaVan <mlavan@google.com>, Mark Nottingham <mnot@mnot.net>, Mike Bishop <Michael.Bishop@microsoft.com>, "HTTP Working Group" <ietf-http-wg@w3.org>
Doing this RST_STREAM_OK does not obviate heuristics. The peer (client) could have already queued a large amount of DATA into TCP.

> The difference with HTTP/1.1 is the server can signal it with a Connection: close header in the response and tear down the connection.

If, after emitting the response, the server is no longer interested in the remainder of the request body, then it can drain it (e.g. recv and discard) and do a graceful mutual HTTP/1.1 FIN exchange (or HTTP/2 END_STREAM exchange). That is what servers have been doing. If some threshold is exceeded (timeout, slowness, byte count, etc.) the server can take a harsher action (HTTP/1.1 RST or HTTP/2 RST_STREAM).

When the body is large enough for it to matter, some clients will start with a 100-continue to avoid getting into the drain situation (others may send a canary request). Client and server applications that have intimate knowledge of each other and are using HTTP as a substrate could use chunked encoding (over TLS) and terminate with a 0-length chunk (or really anything).

As an implementer of a general-purpose HTTP implementation, I understand the motivation. But hacking RST_STREAM semantics is not good design. What is desired here is to emit some signal that indicates: "hey...you don't need to bother sending me any more body because it's irrelevant". This could be done using a bit flag ("TERMINAL") in the HEADERS frame (or make it a separate "TERMINAL" frame):

- A server MAY emit TERMINAL to indicate that it does not need the request body.
- A TERMINAL flag MUST NOT be sent by a client.
- A TERMINAL flag MUST NOT be sent in any state except half-closed (local).
- Upon receipt, a client MAY prematurely END_STREAM the body it is sending.

A sophisticated client can use TERMINAL flag receipt to stop sending more DATA and instead END_STREAM to bring things to a graceful close, assuming the client is not already past the point of no return. An HTTP/1.1-to-HTTP/2 intermediary can switch the parser mode on the HTTP/1.1 side.
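The client-side handling of such a flag might look roughly like the sketch below. TERMINAL is a hypothetical flag value and the Stream class is illustrative; none of this is the API of any real HTTP/2 library.

```python
# Sketch of how a client might react to a hypothetical TERMINAL flag
# carried on a response HEADERS frame. Frame/flag values and the
# Stream class are made up for illustration.

END_STREAM = 0x1
TERMINAL = 0x80  # hypothetical flag value, not in RFC 7540

class Stream:
    def __init__(self, stream_id):
        self.stream_id = stream_id
        self.peer_wants_body = True
        self.closed_local = False
        self.sent_frames = []  # (type, payload, flags) tuples

    def on_headers(self, flags):
        # Server signalled it no longer needs the request body.
        if flags & TERMINAL:
            self.peer_wants_body = False

    def send_data(self, chunk, last=False):
        if self.closed_local:
            return
        if not self.peer_wants_body:
            # Stop sending DATA; close gracefully with an empty
            # END_STREAM frame instead of aborting the stream.
            self.sent_frames.append(("DATA", b"", END_STREAM))
            self.closed_local = True
            return
        flags = END_STREAM if last else 0
        self.sent_frames.append(("DATA", chunk, flags))
        if last:
            self.closed_local = True

stream = Stream(1)
stream.send_data(b"part1")
stream.on_headers(TERMINAL)   # response HEADERS carried TERMINAL
stream.send_data(b"part2")    # suppressed; graceful END_STREAM instead
```

This captures the "not already past the point of no return" caveat: any DATA sent before the TERMINAL flag arrives has already gone out, but everything after it is replaced by a single empty END_STREAM frame.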
It doesn't even have to be a flag; it could be a new frame type or an extension frame. It's really an optimization for clients and servers that are not using any of the standard approaches.

-----Original Message-----
From: Jeff Pinner [mailto:jpinner@twitter.com]
Sent: Friday, September 26, 2014 11:00 AM
To: Matthew Cox
Cc: Martin Thomson; Greg Wilkins; Osama Mazahir; Michaela LaVan; Mark Nottingham; Mike Bishop; HTTP Working Group
Subject: Re: RST_STREAM(OK) after an HTTP response

> This is what I would push back on. Any buffered data at the time of receiving RST_STREAM should be allowed to be dropped on the floor. There should be no guarantee that it will flow up the stack if the other side sends a RST_STREAM.

Why? This isn't TCP, where you can receive a RST and still have data buffered in the receive buffer. By the time you receive the RST_STREAM you have already read the END_STREAM flag. The data is received by your HTTP/2 layer in order.

> Even if I did allow the buffered data to stay, I'd have to fail the send side up the stack, and then the whole stack would have to know that receives are still valid even during a send failure.

Isn't this true with HTTP/1.1 -- if I send you a response before you've finished sending request data, your stack has to know that that response is valid.

> To me the problem Jeff presented is an application layer problem. In his example the termination of the request is based off of the receipt of a response, but since the application doesn't adhere to that, he wants to force it by resetting the stream. What I think he should do is what most 1.1 stacks are doing now, which is have a heuristic on how long you are willing to let the client keep sending data after you already told it that you weren't interested, and once it passes some threshold you reset it, dropping everything on the floor.

A "time-wait" heuristic isn't very helpful for interop.
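The drain-then-reset heuristic Matthew describes could be sketched roughly as follows. The byte and time budgets and the reader interface are illustrative assumptions, not taken from any particular server implementation.

```python
import time

# Sketch of the "drain with thresholds" heuristic: after sending the
# response, keep reading and discarding request-body bytes, but abort
# the stream if the client exceeds a byte-count or time budget.
# The limits below are made-up illustrative values.

MAX_DRAIN_BYTES = 64 * 1024   # give up after 64 KiB of unwanted body
MAX_DRAIN_SECONDS = 5.0       # or after 5 seconds of draining

def drain_request_body(read_chunk):
    """read_chunk() returns the next body chunk, or b'' at END_STREAM.

    Returns "graceful" if the client finished within budget, or
    "reset" if the server should take the harsher action
    (HTTP/1.1 RST / HTTP/2 RST_STREAM).
    """
    drained = 0
    deadline = time.monotonic() + MAX_DRAIN_SECONDS
    while True:
        chunk = read_chunk()
        if not chunk:
            return "graceful"   # client ended the body; close nicely
        drained += len(chunk)
        if drained > MAX_DRAIN_BYTES or time.monotonic() > deadline:
            return "reset"      # threshold exceeded

# A client that stops after two small chunks drains gracefully:
chunks = iter([b"x" * 100, b"y" * 100, b""])
print(drain_request_body(lambda: next(chunks)))  # graceful
```

As Jeff's reply notes, any such thresholds are local policy, which is exactly why a heuristic like this does not help interop: the client has no way to know what budget it is being measured against.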
A server could make the valid time period 1/2 RTT after it sends the reset frame and you would still have the same issue. The difference with HTTP/1.1 is the server can signal it with a Connection: close header in the response and tear down the connection. With HTTP/2, tearing down the connection may be expensive if there are many streams over it, or impossible if it's a proxied connection.
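The 1/2-RTT point is just bandwidth-delay arithmetic: the reset takes half an RTT to reach the client, and anything the client pushes into TCP during that window is already committed. A rough back-of-the-envelope (the link figures are illustrative):

```python
# Rough estimate of how much request body can already be "in flight"
# when a reset reaches the client. Link numbers are illustrative.

def in_flight_bytes(bandwidth_bps, rtt_seconds):
    # The reset takes ~RTT/2 to travel server -> client; anything the
    # client sends during that window is queued before it can react.
    return int(bandwidth_bps / 8 * (rtt_seconds / 2))

# e.g. a client uploading at 100 Mbit/s over an 80 ms RTT path:
print(in_flight_bytes(100_000_000, 0.080))  # 500000 bytes (~500 KB)
```

So even a perfectly behaved client can have hundreds of kilobytes of DATA that the server must either drain or drop, regardless of where the "valid period" boundary is drawn.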
Received on Friday, 26 September 2014 23:16:34 UTC