- From: Roy T. Fielding <fielding@gbiv.com>
- Date: Mon, 2 May 2016 12:06:50 -0700
- To: Martin Thomson <martin.thomson@gmail.com>
- Cc: HTTP Working Group <ietf-http-wg@w3.org>
On May 1, 2016, at 7:26 PM, Martin Thomson <martin.thomson@gmail.com> wrote:

> An issue has come up in webpush that I think bears some discussion here.
>
> There is no flow control for server pushes that don't include data.
>
> As long as a server push contains no data, there is no way to
> practically limit how many pushes can be generated by the server.
> We have several mechanisms that might have worked, but none of these
> actually engage in this case:
>
> - Concurrent stream limit: PUSH_PROMISE causes the promised stream
>   to enter a reserved state, which is not counted toward the stream
>   limit. The HEADERS frame that completes the push will immediately
>   end the stream, so the concurrent-stream count never really
>   increases.
>
> - Flow control: PUSH_PROMISE and HEADERS are the only frames that
>   are sent in this case. Flow control for the affected stream never
>   enters the picture, and connection-level flow control isn't touched
>   either because no message payloads are sent.
>
> - TCP receive window: As a measure of last resort, the TCP receive
>   window will eventually close. However, the best guidance we've
>   given implementations is to drain the TCP receive window as quickly
>   as possible to avoid head-of-line blocking and other stalling
>   problems. That means that a good implementation will leave this
>   window as wide open as possible. In that case, the only effective
>   limit on the rate of inbound pushes will be the bandwidth of the
>   connection bottleneck.
>
> Given effective compression of the PUSH_PROMISE and HEADERS frames,
> the number of actual pushes that could be generated is likely very
> high, even if each one contains unique information. This is not
> likely to be a problem for web browsing cases, but it could be a
> problem for other applications using HTTP.
>
> The only mitigation we currently have is application-specific
> changes, but I think that isn't ideal. If things like pushing 304
> are to become more widespread, then I think that we might need to do
> something about this.

Are we concerned about a server accidentally sending too many pushes, or deliberately attacking the client with too many pushes? I fail to see the point of an attack. Network access can be drowned just as easily with a valid response (e.g., a 4K movie on YouTube). Perhaps a cache could be attacked if each push promise results in a cache eviction before a new representation is received? I'd blame the cache.

For accidental deluges, I think it is better to educate the implementers than to build something complicated into the protocol. Sending data that isn't needed harms the service more than anyone else.

I think it would be better to have a way for the client to signal that it will ignore a given push, for various reasons, as a way of providing feedback to the service to STFU. A stream of feedback seems more useful than limits.

> Two options have been proposed:
>
> 1. A header field that limits the number of pushes in response to any
>    given request, maybe something that builds on Herve's push policy
>    work.
>
> 2. Explicit acknowledgment of each push, plus separate configuration
>    for the maximum number of unacknowledged pushes. This probably
>    needs to be an optional, negotiated feature.
>
> I'm interested in what people think here.

Not (2) -- that's overkill. (1) seems a shame given that it won't prevent a server from sending pushes, and it doesn't feel right given that a client has no idea how many pushes it might need.
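(For readers who want the accounting Martin describes spelled out, here is a minimal sketch. It assumes a toy client-side model, not any real HTTP/2 stack; the `ClientConnection` class and its counters are illustrative names only. It simulates a flood of data-less pushes and shows that none of the three limits move.)

```python
# Toy model of the accounting above: a data-less push (PUSH_PROMISE
# followed by HEADERS with END_STREAM) never moves any of the limits.
# All names here are illustrative, not drawn from any library.

class ClientConnection:
    MAX_CONCURRENT_STREAMS = 100    # SETTINGS_MAX_CONCURRENT_STREAMS
    INITIAL_WINDOW = 65_535         # default connection flow-control window

    def __init__(self):
        self.open_streams = 0       # streams counted against the limit
        self.conn_window = self.INITIAL_WINDOW
        self.promises = 0

    def on_push_promise(self, promised_stream_id):
        # The promised stream enters "reserved (remote)"; reserved streams
        # are excluded from the concurrent-stream count (RFC 7540, 5.1.2).
        self.promises += 1

    def on_headers_end_stream(self, stream_id):
        # HEADERS with END_STREAM completes the push at once: the stream
        # closes before it ever counts as open, and with no DATA frames
        # there is nothing to debit from any flow-control window.
        pass

conn = ClientConnection()
for sid in range(2, 2_000_000, 2):  # server-initiated streams have even IDs
    conn.on_push_promise(sid)
    conn.on_headers_end_stream(sid)

assert conn.open_streams == 0
assert conn.conn_window == ClientConnection.INITIAL_WINDOW
print(f"{conn.promises} pushes absorbed; only link bandwidth limited them")
```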
On the whole, I think this is addressing a theoretical problem in the absence of a good model. For example, what happens when h2 is being used as a channel for notifications? I would expect very few requests and an unlimited number of pushes. Unless we can think of a reason that browsers can't just discard the promises, I don't think we need to do anything here, even though a STFU frame would be amusing.

....Roy
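(On "just discard the promises": HTTP/2 already gives a client a per-push refusal signal, namely resetting the promised stream with RST_STREAM and the CANCEL or REFUSED_STREAM error code (RFC 7540, 8.2.2), which is roughly the stream of feedback Roy describes. A hedged sketch follows; the `Client` class and the `send_frame` transport hook are hypothetical, not any particular library's API.)

```python
# Sketch of per-push feedback via HTTP/2's existing refusal mechanism:
# RST_STREAM with CANCEL on the promised stream. Frame and error-code
# names follow RFC 7540; Client and send_frame are hypothetical.

CANCEL = 0x8  # RST_STREAM error code (RFC 7540, section 7)

class Client:
    def __init__(self, send_frame):
        self.send_frame = send_frame  # hypothetical transport callback

    def on_push_promise(self, promised_stream_id, request_headers):
        if not self.wants(request_headers):
            # Tell the server this push will be ignored. A steady stream
            # of these is feedback a server can use to stop pushing.
            self.send_frame("RST_STREAM",
                            stream_id=promised_stream_id,
                            error_code=CANCEL)

    def wants(self, request_headers):
        # Placeholder policy: a real client would consult its cache,
        # current page state, or notification subscriptions.
        return False

# Usage with a stub transport that just records frames:
sent = []
client = Client(lambda kind, **kw: sent.append((kind, kw)))
client.on_push_promise(2, {":path": "/unwanted.css"})
assert sent == [("RST_STREAM", {"stream_id": 2, "error_code": CANCEL})]
```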
Received on Monday, 2 May 2016 19:07:15 UTC