
Some thoughts on server push and client pull

From: Gabriel Montenegro <Gabriel.Montenegro@microsoft.com>
Date: Thu, 7 Jun 2012 01:30:47 +0000
To: "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
CC: Matthew Cox <macox@microsoft.com>, Ivan Pashov <ivanpash@microsoft.com>, Osama Mazahir <OSAMAM@microsoft.com>, Rob Trace <Rob.Trace@microsoft.com>, Jonathan Silvera <jsilvera@microsoft.com>
Message-ID: <CA566BAEAD6B3F4E8B5C5C4F61710C1148040AF5@TK5EX14MBXW602.wingroup.windeploy.ntdev.microsoft.com>
Hi folks,

Amongst some colleagues, we've been thinking about server push in HTTP 2.0 and have some ideas we'd like to share as a way to promote discussion on this subject.

1. Overview
Server Push is an interesting idea that could provide some performance gains for HTTP 2.0 clients; however, we don't believe it is core to the HTTP 2.0 protocol. Server Push could result in servers sending unnecessary data to the client, introduces potential delays to avoid race conditions, and is mostly relevant for browser applications. A less complex alternative, "Smart Client Pull", which addresses some of these drawbacks while aligning better with legacy HTTP, is proposed below.

2. Issues with current Server Push in SPDY

We don't envision Server Push as part of the base HTTP 2.0 protocol, but see it as a potentially interesting extension, as long as there is some way for the client to exert some control over when and how it is used. One fundamental requirement is for clients to be able to control "Server Push" behavior via a new opt-in <name TBD> header. Servers MUST NOT push unrequested data to the client unless the top level page request's <name TBD> header is set to allow "Server Push".
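The opt-in check described above can be sketched as follows. The header name is still <name TBD> in this proposal, so "Accept-Push" below is purely a hypothetical placeholder, not a proposed name:

```python
# Sketch of the server-side opt-in gate for Server Push.
# NOTE: the real header name is <name TBD>; "Accept-Push" is a
# hypothetical placeholder used only for illustration.

def may_push(request_headers):
    """Return True only if the top level page request opted in to Server Push."""
    # HTTP header names are case-insensitive, so normalize before lookup.
    headers = {k.lower(): v for k, v in request_headers.items()}
    return headers.get("accept-push", "").strip().lower() == "on"

# A server MUST NOT push unless the client opted in:
assert may_push({"Accept-Push": "on"}) is True
assert may_push({}) is False
```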

Server Push does not require any validation prior to pushing data to the client, which could result in the server sending unnecessary data to clients that have some of the pushed resources stored in their cache.

Furthermore, "Server Push" introduces a race condition in which a client could start a new request for data that the server is already in the process of pushing, effectively causing the same resource to be downloaded twice. SPDY addresses the race condition by not sending any data (headers are OK) for the top level page until all of the SYN_STREAM frames for the dependencies it will push have been sent:

"To minimize race conditions with the client, the SYN_STREAM for the pushed resources MUST be sent prior to sending any content which could allow the client to discover the pushed resource and request it."

We agree that SPDY's approach is a good way to mitigate the race condition in Server Push without introducing significant complexity. Unfortunately, mitigating the race condition in this manner prevents the server from sending data for the top level page, which could result in user-visible delays. Whether the user sees a delay depends on how many messages the server pushes to the client and how large they are.

3. Smart Client Pull alternative to Server Push

We would like to propose an alternative to Server Push for discussion. This alternative is closely aligned with existing standards and could even work for HTTP 1.1.

When a server receives an HTTP request for a top level page, the server will generate a list of resources needed to fully load the top level page. The server will send the optimal pre-fetch list to the client, via LINK headers, with a "prefetch" link relation type (defined in HTML5 per http://www.iana.org/assignments/link-relations/link-relations.xml).
The server SHOULD also include the corresponding cache validators for each resource in the pre-fetch list. An extension to the "prefetch" link relation type will be needed to allow cache validator data.
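The server side of this could be sketched as follows. The proposal notes that carrying cache validators requires an extension to the "prefetch" link relation type; the "etag" parameter below is a hypothetical illustration of such an extension, not an existing standard:

```python
# Sketch: emit the optimal pre-fetch list as a Link header with
# rel="prefetch". The "etag" link parameter is a HYPOTHETICAL extension
# for carrying cache validators, as the proposal says one would be needed.

def prefetch_link_header(resources):
    """resources: list of (url, etag) pairs -> value for a Link header."""
    parts = []
    for url, etag in resources:
        parts.append('<%s>; rel="prefetch"; etag="%s"' % (url, etag))
    return ", ".join(parts)

header = prefetch_link_header([
    ("/styles/site.css", "abc123"),
    ("/scripts/app.js", "def456"),
])
# One header value lists every blocking dependency plus its validator.
```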

When a client receives data for a top level page, it will begin processing the top level page response, while simultaneously pre-fetching resources in the pre-fetch list that are not in the client cache or that are cached but invalid, as indicated by the cache validators included in the pre-fetch list. Servers SHOULD only include resources that block loading of the top level page in the optimal pre-fetch list.
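The client-side decision above amounts to a simple filter over the pre-fetch list, as in this sketch (names and the validator format are illustrative only; the actual syntax is TBD in the proposal):

```python
# Sketch of the client side: given the pre-fetch list (URL plus cache
# validator) and the local cache, request only resources that are missing
# or whose validator no longer matches. Validator format is illustrative.

def resources_to_fetch(prefetch_list, cache):
    """prefetch_list: list of (url, etag); cache: dict url -> cached etag."""
    needed = []
    for url, etag in prefetch_list:
        if cache.get(url) != etag:  # absent from cache, or stale validator
            needed.append(url)
    return needed

prefetch = [("/a.css", "v1"), ("/b.js", "v2"), ("/c.png", "v3")]
cache = {"/a.css": "v1", "/b.js": "old"}
# Only /b.js (stale) and /c.png (absent) cost a round trip:
assert resources_to_fetch(prefetch, cache) == ["/b.js", "/c.png"]
```

This is where the bandwidth saving over Server Push comes from: a fresh cached resource is never re-downloaded.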

This eliminates the need to block data for the top level request until all SYN_STREAM frames for pushed dependencies are sent in order to avoid the race condition. The client would pre-fetch the resources in the optimal pre-fetch list, which introduces an extra RTT. We should gather data to determine how significant this extra RTT is compared to the complexity of implementing Server Push and some of the benefits obtained by multiplexing new connections for the dependency resources.

Clients will control "Smart Client Pull" behavior via a new opt-in <name TBD> header. Servers MUST NOT send LINK headers and associated cache validator data to clients unless the top level page request's <name TBD> header is set to allow "Smart Client Pull".

4. Other alternatives

These are other client-driven alternatives to "Server Push"; however, we need to capture additional data to determine the impact and cost of each:

a. Bundling of HTTP resources: Administrators can package static server-side resources needed to download a webpage as a single entity. A client would download all static dependencies by issuing a single HTTP request. However, this could result in the client receiving data it already has in its cache.

b. Combination of Smart Client Pull and Bundling of HTTP resources (a).
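Alternative (a) above could be sketched as follows. The proposal does not specify a bundle format; the zip archive here is a hypothetical choice purely for illustration:

```python
# Sketch of alternative (a): static dependencies packaged as one entity so
# the client downloads them with a single HTTP request. The zip format is
# a HYPOTHETICAL bundle format; the proposal leaves this unspecified.
import io
import zipfile

def make_bundle(resources):
    """resources: dict path -> bytes; returns the bundle as bytes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        for path, body in resources.items():
            z.writestr(path, body)
    return buf.getvalue()

def unpack_bundle(data):
    """Inverse of make_bundle: recover path -> bytes from one response body."""
    with zipfile.ZipFile(io.BytesIO(data)) as z:
        return {name: z.read(name) for name in z.namelist()}

bundle = make_bundle({"a.css": b"body{}", "b.js": b"/*js*/"})
assert unpack_bundle(bundle) == {"a.css": b"body{}", "b.js": b"/*js*/"}
```

Note the drawback called out in (a): the bundle is all-or-nothing, so cached resources are re-downloaded; combining this with Smart Client Pull (alternative b) is meant to address that.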

5. Recommendation:

"Server Push" should not be a part of the core HTTP 2.0 spec for the following reasons:

- "Server Push" is not core to defining the HTTP 2.0 transport.

- "Server Push" without optimizations can result in unnecessary data being sent to the client.

- "Server Push", even with optimizations, introduces delays to avoid race conditions that are not present in other solutions.

- "Server Push" is relevant primarily for browser applications and not for more general use cases.

6. Areas for further discussion:

- How to send the "prefetch" list and its corresponding cache validators in an efficient manner. These resources are identified by URLs, which can be quite long, causing Smart Client Pull to transmit additional data.

- Validation of cached resources included in the prefetch list will require additional CPU cycles. However, it is no different from cache validation currently performed on cached resources. The tradeoff here is CPU load in exchange for avoiding unnecessary downloads of resources already in the client's cache.

- Compare "Smart Client Pull" and "Bundling" to determine the impact of the features in isolation and combined.
Received on Thursday, 7 June 2012 01:31:27 UTC
