Re: RST_STREAM(OK) after an HTTP response

> This is what I would push back on.  Any buffered data at the time of receiving RST_STREAM should be allowed to be dropped on the floor.  There should be no guarantee that it will flow up the stack if the other side sends a RST_STREAM.

Why? This isn't TCP, where you can receive a RST while data is still
sitting in the receive buffer. By the time you receive the RST_STREAM
you have already read the END_STREAM flag: your HTTP/2 layer receives
the data for a stream in order.
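To make the ordering argument concrete, here is a minimal sketch (hypothetical frame representation, not any real HTTP/2 library) of a per-stream receiver: because frames on one stream arrive in order, a RST_STREAM sent after the response's END_STREAM can only be seen after the complete response has already been delivered up the stack.

```python
# Hypothetical sketch of a per-stream HTTP/2 receiver. Frames on a single
# stream are processed in arrival order, so a RST_STREAM that the peer
# sends *after* END_STREAM is only seen after the full response body.

def deliver(frames):
    """Process one stream's frames in order; return (body, completed)."""
    body = b""
    ended = False
    for frame in frames:
        if frame["type"] == "DATA":
            body += frame["data"]
            if frame.get("end_stream"):
                ended = True          # the application has the full response
        elif frame["type"] == "RST_STREAM":
            break                     # can only arrive after the frames above
    return body, ended

# Peer sends the whole response, then RST_STREAM(NO_ERROR):
frames = [
    {"type": "DATA", "data": b"hello", "end_stream": True},
    {"type": "RST_STREAM", "error": "NO_ERROR"},
]
body, ended = deliver(frames)
```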

> Even if I did allow the buffered data to stay, I'd have to fail the send side up the stack, and then the whole stack would have to know that receives are still valid even during a send failure.
>

Isn't this already true with HTTP/1.1? If I send you a response before
you've finished sending request data, your stack has to know that the
response is valid.
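The HTTP/1.1 case can be sketched over a socketpair (an illustrative toy, not a real client/server): the "server" side answers as soon as it has the request headers, while the "client" side is still sending the body, and the client's stack must still treat that early response as valid.

```python
import socket

# Toy HTTP/1.1 exchange over a socketpair: the server responds before
# the client has finished sending the request body.
client, server = socket.socketpair()

client.sendall(b"POST /upload HTTP/1.1\r\nContent-Length: 10\r\n\r\n")
server.recv(4096)                       # server reads the headers only
server.sendall(b"HTTP/1.1 413 Payload Too Large\r\n\r\n")

client.sendall(b"0123456789")           # client is still sending the body...
response = client.recv(4096)            # ...yet a valid response is waiting
```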

> To me the problem Jeff presented is an application layer problem.  In his example the termination of the request is based off of the receipt of a response, but since the application doesn't adhere to that, he wants to force it by resetting the stream.  What I think he should do is what most 1.1 stacks are doing now which is have a heuristic on how long you are willing to let the client keep sending data after you already told it that you weren't interested, and once it passes some threshold you reset it dropping everything on the floor.

A "time-wait" heuristic isn't very helpful for interop. A server could
set its grace period to half an RTT after it sends the reset frame, and
you would still have the same issue. The difference with HTTP/1.1 is
that the server can signal it with a Connection: close header in the
response and tear down the connection. With HTTP/2, tearing down the
connection may be expensive if there are many streams multiplexed over
it, or impossible if it's a proxied connection.
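For reference, the heuristic under discussion might look like this sketch (the threshold and function are hypothetical, chosen for illustration): after the response has been sent, late request data is tolerated for some server-chosen window, then the stream is reset and everything remaining is dropped. The window is arbitrary, which is exactly why two implementations can disagree.

```python
# Hypothetical sketch of the "keep tolerating late data for a while,
# then reset" heuristic. GRACE_SECONDS is an arbitrary server-side
# threshold, not taken from any spec.

GRACE_SECONDS = 2.0

def handle_late_data(response_sent_at, frame_arrived_at):
    """Decide what to do with request DATA arriving after the response."""
    if frame_arrived_at - response_sent_at <= GRACE_SECONDS:
        return "discard"        # within the tolerance window: drop quietly
    return "rst_stream"         # past the threshold: reset the stream
```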

Received on Friday, 26 September 2014 18:00:55 UTC