- From: Scott Mitchell <scott.k.mitch1@gmail.com>
- Date: Fri, 20 Jan 2017 11:54:50 -0800
- To: Tom Bergan <tombergan@chromium.org>
- Cc: laike9m <laike9m@gmail.com>, Martin Thomson <martin.thomson@gmail.com>, Tatsuhiro Tsujikawa <tatsuhiro.t@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>
- Message-ID: <CAFn2buAj9hnuhihR1GWZtZ9_e5kZjZQyKog6BXfV7O5qxvsJgQ@mail.gmail.com>
On Fri, Jan 20, 2017 at 10:28 AM, Tom Bergan <tombergan@chromium.org> wrote:

> IIUC, PUSH_PROMISEs don't count against the stream limit due to a
> (hypothetical) scenario like the following: A server currently has 199
> streams open on a connection, with a limit of 200 per connection, and the
> client makes another request. Now the server has 200 streams open. The
> server wants to push a resource. Since the PUSH_PROMISE does not count
> against the stream limit, the server is allowed to send a PUSH_PROMISE,
> which notifies the client that the resource will be pushed (the client does
> not need to request it).
>
> I submit that this is a nearly pointless optimization. Broadly speaking,
> there are two reasons a server might want to push a resource:
>
> 1. The server speculates that the client will need that resource shortly.
> If the pushed resource is not more important than essentially all of the 200
> streams currently in flight, there is little reason to push that resource.
> This is a speculative push, meaning the goal is to reduce latency, but we
> can't reduce latency if the push will be queued behind 200 other streams
> anyway -- we might as well wait for the client to make the request.
>
> 2. The server is participating in a notification protocol where the push is
> used to notify the client of some event. In this case, the PUSH_PROMISE
> does not need to be sent immediately because the server is not speculating
> about a future client request. It would be fine to queue the PUSH_PROMISE
> in the server until one of the 200 concurrent streams is closed.
>
> Therefore, I propose the following convention: servers should count
> PUSH_PROMISEs towards the concurrent stream limit. This doesn't
> necessarily help clients, since a client cannot rely on every server to
> implement this behavior, but overall it seems like a much simpler
> situation.
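[The convention Tom proposes above could be sketched in a few lines of Go. This is a hypothetical illustration, not the actual golang.org/x/net/http2 implementation; the `streamBudget` type and its fields are invented names:]

```go
package main

import "fmt"

// streamBudget tracks open and reserved streams against the peer's
// advertised SETTINGS_MAX_CONCURRENT_STREAMS. All names here are
// hypothetical; the sketch illustrates the proposed convention that
// PUSH_PROMISE (reserved) streams count toward the limit.
type streamBudget struct {
	maxConcurrent uint32 // peer's SETTINGS_MAX_CONCURRENT_STREAMS
	open          uint32 // streams in "open" or "half-closed" state
	reserved      uint32 // streams reserved by PUSH_PROMISE
}

// canPush reports whether another PUSH_PROMISE fits under the
// proposed convention: open + reserved < limit.
func (b *streamBudget) canPush() bool {
	return b.open+b.reserved < b.maxConcurrent
}

// reservePush records a PUSH_PROMISE if the budget allows it;
// otherwise the server would queue or skip the push.
func (b *streamBudget) reservePush() bool {
	if !b.canPush() {
		return false
	}
	b.reserved++
	return true
}

func main() {
	// The scenario from the message: 199 of 200 streams in use.
	b := &streamBudget{maxConcurrent: 200, open: 199}
	fmt.Println(b.reservePush()) // one slot left: push allowed
	fmt.Println(b.reservePush()) // at the limit: push refused
}
```

[Under this convention the second push is simply deferred until a stream closes, which matches the argument that a push queued behind 200 streams gains nothing by being promised early.]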
> FWIW, this is what I have implemented in Go's HTTP/2 server:
> https://github.com/golang/net/blob/77412025ac6f6821b4938edeb45af939de5cccec/http2/http2.go#L85

To clarify, my original intention (given that the spec has already been
released) is to have the limit be "advisory". The goal would be to preserve
existing behavior but also allow the client to inform the server of its
limitations. The server would have the option to use that information, or to
ignore it for reserved streams. An important factor here is that the end
result of violating SETTINGS_MAX_CONCURRENT_STREAMS and of the client
resetting a PUSH_PROMISE is the same (a stream error, i.e. RST_STREAM).

Although SETTINGS_MAX_CONCURRENT_STREAMS seems like a natural choice for
this, there is one potential challenge. Section 5.1.2 includes the language
"Endpoints MUST NOT exceed the limit set by their peer". The end result of
"sending too many push promises" and "violating
SETTINGS_MAX_CONCURRENT_STREAMS" may be the same (a stream error), but
counting pushed streams against SETTINGS_MAX_CONCURRENT_STREAMS may violate
this MUST NOT clause for existing implementations which don't count pushed
streams against SETTINGS_MAX_CONCURRENT_STREAMS. To be consistent, the MUST
NOT would have to be relaxed and clarified.

https://tools.ietf.org/html/rfc7540#section-5.1.2

   Endpoints MUST NOT exceed the limit set by their peer.  An endpoint
   that receives a HEADERS frame that causes its advertised concurrent
   stream limit to be exceeded MUST treat this as a stream error
   (Section 5.4.2 <https://tools.ietf.org/html/rfc7540#section-5.4.2>)
   of type PROTOCOL_ERROR or REFUSED_STREAM.  The choice of error code
   determines whether the endpoint wishes to enable automatic retry (see
   Section 8.1.4 <https://tools.ietf.org/html/rfc7540#section-8.1.4>)
   for details.
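[The "advisory" behavior described above — a client resetting a PUSH_PROMISE that exceeds what it is willing to accept, producing the same stream error as a SETTINGS_MAX_CONCURRENT_STREAMS violation — could look roughly like this on the client side. The function and its parameters are hypothetical, not part of any real HTTP/2 library; only the error-code values come from RFC 7540, Section 7:]

```go
package main

import "fmt"

// Error codes from RFC 7540, Section 7.
const (
	ProtocolError uint32 = 0x1
	RefusedStream uint32 = 0x7
)

// handlePushPromise sketches a client enforcing an advisory limit on
// reserved streams: a PUSH_PROMISE that would exceed the limit is
// answered with RST_STREAM. reservedCount is the number of streams the
// peer has already reserved; clientLimit is the client's advisory cap.
func handlePushPromise(reservedCount, clientLimit uint32) (resetCode uint32, accept bool) {
	if reservedCount >= clientLimit {
		// Same end result as violating SETTINGS_MAX_CONCURRENT_STREAMS:
		// a stream error. REFUSED_STREAM signals that the stream was not
		// processed, so the server can safely retry or abandon the push.
		return RefusedStream, false
	}
	return 0, true
}

func main() {
	code, ok := handlePushPromise(100, 100)
	fmt.Println(ok, code) // over the advisory limit: refuse with 0x7
	code, ok = handlePushPromise(10, 100)
	fmt.Println(ok, code) // under the limit: accept the pushed stream
}
```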
> On Fri, Jan 20, 2017 at 9:01 AM, Scott Mitchell <scott.k.mitch1@gmail.com> wrote:
>
>> On Fri, Jan 20, 2017 at 5:17 AM, laike9m <laike9m@gmail.com> wrote:
>>
>>> If you don't include RESERVED streams in the count for
>>> SETTINGS_MAX_CONCURRENT_STREAMS, then how do you limit the amount of
>>> RESERVED streams, and how does your peer know about this limit? I have
>>> imposed an implementation-specific metric in the past, but this seems less
>>> preferable than relying on something in the RFC that the peer is aware of.
>>> Either way, having an infinite amount of something doesn't work in practice.
>>>
>>> As Martin has explained, H2 doesn't limit the amount of RESERVED
>>> streams, based on the notion that HEADERS are close to free.
>>
>> "HEADERS are free" sounds like an oversimplification which becomes more
>> apparent as the concurrency of streams and connections grows. There are
>> other provisions in the RFC to limit the amount of state consumed by
>> HEADERS (SETTINGS_HEADER_TABLE_SIZE, SETTINGS_MAX_HEADER_LIST_SIZE).
>>
>> In addition to headers, this may require additional state to be allocated
>> for stream management. The specification also has mechanisms to limit state
>> consumed by streams.
>>
>>> It's true that trying to send an infinite number of PUSH_PROMISEs to the
>>> client can cause problems, but 1. this only happens if the server is
>>> malicious, and if it's malicious, having a limit in the RFC won't
>>> prevent anything, and 2. not counting PUSH_PROMISEs is a tradeoff for fast
>>> delivery of PUSH_PROMISEs, which stops the client from sending more
>>> requests. I guess this is what Martin meant by "If you limit server push by
>>> applying a stream limit, then you prevent it from being used in time for
>>> the client to use it."
>>>
>>> (Forgot to reply to all :P)
>>
>> Malicious actors are a concern and must be dealt with.
>> However, there may
>> be proxy-like systems with large amounts of concurrency, or other
>> memory-constrained systems, that already control their resources but whose
>> peer's only mechanism to learn about these limits is to try-then-fail.
>> Assuming clients impose some limit on their state (infinite state isn't
>> practical), the problem of "the client won't accept this push" exists
>> whether or not the server knows about it beforehand. Knowing about it
>> beforehand gives the server the ability to prioritize which resources it
>> wants to push, or to make other more informed decisions.
>>
>>> On Thu, Jan 19, 2017 at 9:21 AM, Scott Mitchell <scott.k.mitch1@gmail.com> wrote:
>>>
>>>> On Tue, Jan 17, 2017 at 2:43 PM, Scott Mitchell <scott.k.mitch1@gmail.com> wrote:
>>>>
>>>>> From my perspective I would like to see two clarifications:
>>>>>
>>>>> 1. It is clear to me that PRIORITY doesn't impact state.
>>>>
>>>> Just to clarify ... it is clear that a PRIORITY frame doesn't impact
>>>> the state of the stream it is carrying priority information for. The
>>>> impact PRIORITY frames have on other streams is not clear due to the
>>>> wording in section 5.1.1.
>>>>
>>>>> However, Section 5.1.1 states "first use of a new stream identifier",
>>>>> which makes no reference to stream state. If stream state is
>>>>> important/implied here, better to be specific about it. I don't think the
>>>>> one-off example below this text is sufficient to convey the intended
>>>>> implications of this statement.
>>>>>
>>>>> 2. Section 5.1.2 states "Streams in either of the 'reserved' states
>>>>> do not count toward the stream limit.", which seems to conflict with
>>>>> section 8.2.2: "A client can use the SETTINGS_MAX_CONCURRENT_STREAMS
>>>>> setting to limit the number of responses that can be concurrently pushed
>>>>> by a server.". These two statements appear to contradict each other.
>>>>> Since
>>>>> SETTINGS_MAX_CONCURRENT_STREAMS is really the only mechanism to limit
>>>>> resources due to server push, I'm assuming section 5.1.2 is overly
>>>>> restrictive.
>>>>>
>>>>> On Tue, Jan 17, 2017 at 2:27 PM, Martin Thomson <martin.thomson@gmail.com> wrote:
>>>>>
>>>>>> On 18 January 2017 at 01:37, Tatsuhiro Tsujikawa <tatsuhiro.t@gmail.com> wrote:
>>>>>> > If my understanding is correct, this only refers to the new stream
>>>>>> > ID used by HEADERS, and PUSH_PROMISE frames which open or reserve
>>>>>> > streams. The example text following that statement uses HEADERS,
>>>>>> > which opens a new stream. PRIORITY frames do not change stream
>>>>>> > state, and there is no reason to close all unused streams lower
>>>>>> > than the bearing stream ID. That said, I agree that this is not
>>>>>> > crystal clear in the document. In practice, this is probably a
>>>>>> > rather rare case.
>>>>>>
>>>>>> This is, I think, the expectation.
>>>>>>
>>>>>> I think that we probably want to clarify the point by explicitly
>>>>>> saying that PRIORITY doesn't affect stream states. We say that it can
>>>>>> be sent in any state, but we don't also mention that important point.
>>>>>> Do people here agree that an erratum on this point is appropriate
>>>>>> here?
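[Scott's point earlier in the thread — that knowing the client's limit beforehand lets the server prioritize which resources to push rather than try-then-fail — could be sketched as a capped, weight-ordered selection. Everything here is hypothetical (the type, the weights, the paths); it only illustrates the idea:]

```go
package main

import (
	"fmt"
	"sort"
)

// pushCandidate pairs a pushable resource with a server-assigned
// weight. Both the type and the weighting scheme are invented for
// this sketch.
type pushCandidate struct {
	path   string
	weight int // higher means more worth pushing
}

// choosePushes picks at most budget candidates, highest weight first,
// where budget is the number of reserved-stream slots the client's
// advertised limit leaves available. A server that only learns the
// limit by try-then-fail cannot make this choice up front.
func choosePushes(cands []pushCandidate, budget int) []string {
	sort.Slice(cands, func(i, j int) bool {
		return cands[i].weight > cands[j].weight
	})
	if budget > len(cands) {
		budget = len(cands)
	}
	out := make([]string, 0, budget)
	for _, c := range cands[:budget] {
		out = append(out, c.path)
	}
	return out
}

func main() {
	cands := []pushCandidate{
		{"/logo.png", 1}, {"/app.js", 10}, {"/style.css", 8},
	}
	// Only two reserved-stream slots available: push the two most
	// valuable resources and skip the rest.
	fmt.Println(choosePushes(cands, 2))
}
```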
Received on Friday, 20 January 2017 19:55:27 UTC