- From: Amos Jeffries <squid3@treenet.co.nz>
- Date: Sat, 24 Nov 2018 21:08:44 +1300
- To: ietf-http-wg@w3.org
On 24/11/18 11:11 am, Evert Pot wrote:
>
> On 11/23/18 1:12 AM, Mark Nottingham wrote:
>> Hi Evert,
>>
>> Just some personal thoughts.
>>
>> Although folks are generally not as optimistic about Push as they
>> used to be, I think there's still some appetite for considering it
>> for non-browsing use cases; as such, this is potentially interesting.
>>
>> However, it's not clear at all how it'd work to me. Push is
>> effectively hop-by-hop; intermediaries need to make a decision to
>> send it, and that decision needs to be informed. How will a generic
>> intermediary (like a CDN) know what link relations like "item" and
>> "author" are, and how to apply them to a particular connection?
>>
>> It's likely that a server (whether an intermediary or an origin) is
>> going to have many, many potential representations of these types to
>> send, so it'll need to have knowledge of the format to be able to get
>> those links out and push the appropriate responses. That doesn't seem
>> generic -- unless you require all of the potentially pushable links
>> to appear in Link headers (which doesn't seem like it would scale
>> very well).
>
> I potentially see this used in 2 ways:
>
> * Through Link headers
> * By sending links in response bodies, such as HTML, HAL and ATOM.
>
> The draft, as it's written, is agnostic about where the links appear.
>
> The origin server will generally know which resources to send based
> on these links, and link-relationships appearing in Prefer-Push would
> only apply to links for which the context uri is the request uri.
>
> In the case of Link headers, intermediaries could cache which links
> were available and push optimistically. I agree though if links are
> in the body, it's unlikely proxies will parse and push those. I think
> that's OK.
>
> I'm not entirely sure about the potential scale concern. Is the
> concern mainly about the appearance of many Link headers? Intuitively
> this makes sense to me.
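For concreteness, the kind of exchange being discussed looks roughly
like this (the relation names are the ones from the discussion above;
the URLs are invented, and the exact header syntax is as I read the
Prefer-Push draft):

```
GET /articles HTTP/2
Host: api.example.org
Prefer-Push: item, author

HTTP/2 200 OK
Content-Type: application/hal+json
Link: </articles/1>; rel="item"
Link: </articles/2>; rel="item"
Link: </users/evert>; rel="author"

(the server may now send a PUSH_PROMISE for each linked resource
whose context URI is the request URI)
```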
> However, for links appearing in response bodies: usually they
> already exist there, so we're not really sending any additional
> bytes.

Linked resources are not always fetched by Browsers, either because
they are already in the client-side cache, or because they are not
necessary for the part(s) of the page being displayed or otherwise
used. That is a large source of extra PUSH bytes that can be avoided.

>
>> Pushing *all* of the resources -- or even a significant subset -- is
>> likely to cause performance problems of its own, as there will be
>> contention with any other requests made for bandwidth. This is one
>> of the big things we've learned about push in the browsing use case,
>> and I suspect it applies here as well.
>
> I think one of the use-cases we need to test is, for example, a JSON
> collection that contains "n" items. Each is represented as a
> resource, and linked to the collection with an "item" link
> relationship.
>
> In the past each resource may have been 'embedded' in the response
> body of the collection resource, but now we're pushing them. We'll
> need to benchmark to find out for what value of "n" this starts
> breaking.

I think it is important to distinguish what "the past" actually means.

If by past you mean the HTTP/1.0 world: yes, that world benefited from
inlining content as described.

If by past you mean the HTTP/1.1 world: that world did not gain nearly
as much benefit from inline objects. Use of inlining often prevented
mechanisms like If-* revalidation, pipelining and compression from
achieving their best bandwidth reductions. The delays waiting for
objects earlier in a pipeline still gave a small edge-case argument
for inlining some resources. Contrary to popular myth, a lot of
services do not actually benefit from inlining in a purely HTTP/1.1
world.

HTTP/2 adds multiplexing at the frame level rather than the message
level, so even those pipeline-delay cases no longer exist.
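On the earlier point about intermediaries caching Link headers and
pushing optimistically: a minimal sketch of what that extraction step
could look like (the helper name and header value are invented for
illustration; a real intermediary would need a full RFC 8288 parser):

```python
import re

def link_header_targets(link_value, wanted_rels):
    """Extract target URIs from a Link header value for the given
    link relations. Deliberately simplified: assumes comma-separated
    entries of the form </uri>; rel="name" and ignores the quoting
    corner cases RFC 8288 allows."""
    targets = []
    for entry in link_value.split(","):
        m = re.match(r'\s*<([^>]+)>\s*;\s*rel="([^"]+)"', entry)
        if m and m.group(2) in wanted_rels:
            targets.append(m.group(1))
    return targets

# An intermediary that cached this header could push these targets
# optimistically on the next request for the same resource.
header = '</articles/1>; rel="item", </users/amos>; rel="author"'
print(link_header_targets(header, {"item"}))  # ['/articles/1']
```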
Even if one ignores PUSH completely, inlining resources is a negative
for performance in the HTTP/2 world. PUSH only adds a way to further
optimize some edge cases where delivery of the index object is less
important than the objects it references, so those sub-objects should
start delivery immediately, prior to the index itself.

>
> Frankly, I have no idea even what order of magnitude "n" is in. My
> hope is that once we know, we can determine a few best practices for
> this feature. It's still possible that we'll find that it's just not
> possible to get reasonable performance out of this, but we intend to
> find out.
>

AFAIK that 'n' is what the Browser people have been measuring and
testing these past few years. At least for their type of traffic. IMHO
that approach is a good one to follow for non-browser agents as well,
for their type of traffic, before this kind of specification gets
standardized as "The" way to do things. You may find that PUSH is
actually a bad way to go, that 'n' is too small to be much use, or
just wildly different from your assumptions.

> Anyway, I'm still curious to learn a bit more about the performance
> problems of doing many pushes. Is there mainly an issue with having
> too many parallel H2 streams? Could it be solved by limiting the
> number of parallel pushes?
>

I'm not sure "mainly" is the right word. There are definitely problems
with doing a lot of H2 PUSHes, for the same reason that in general
packet networking it is a bad idea for TCP to have too many packets in
transit awaiting ACK. It causes memory and resource pressure on every
hop of the network that the traffic goes through. The more you do, the
more chance that something somewhere breaks one of them and causes the
whole set (or a subset "up to N") to enter a failure-recovery state.
It is a fine tight-rope balancing act that both endpoints are
performing with only _guesses_ about what the other endpoint actually
needs to receive.
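The idea of limiting the number of parallel pushes, as asked above,
could at least be prototyped with a simple concurrency cap. A sketch
in Python, where push_resource is a stand-in for issuing a
PUSH_PROMISE through whatever HTTP/2 library the server uses (all
names here are invented for illustration):

```python
import asyncio

async def push_resource(path):
    """Stand-in for promising and sending one pushed response;
    here it just simulates a little I/O delay."""
    await asyncio.sleep(0.01)
    return path

async def push_all(paths, limit=6):
    # Cap how many pushes are in flight at once, so a large 'n'
    # does not flood the connection with concurrent streams.
    sem = asyncio.Semaphore(limit)

    async def bounded(path):
        async with sem:
            return await push_resource(path)

    return await asyncio.gather(*(bounded(p) for p in paths))

pushed = asyncio.run(push_all([f"/items/{i}" for i in range(20)]))
print(len(pushed))  # 20
```

Whether any particular value of the cap helps is exactly the kind of
thing the proposed benchmarking would have to establish.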
The whole complex issue of message priorities in HTTP/2, which forces
some messages to go faster or slower than others, exists in part to
counter incorrect guessing.

> If this has to do with the number of streams, I imagine the issue
> also exists with a similar number of parallel GET requests. But
> maybe there's something specific to Push.
>

One key difference with parallel GET is that the server can be
absolutely sure the client actually wants those objects. With PUSH the
server is guessing, and every wrong guess is a waste of bandwidth.

Also, the client message rejecting/closing a PUSH stream is most
delayed when bandwidth limits are slowing traffic down overall, and
least delayed when bandwidth is plentiful and fast. So the
impact/waste of PUSH is at its worst during the very times it is most
important to avoid wasting bandwidth.

Amos
Received on Saturday, 24 November 2018 08:09:26 UTC