
Re: Implementation Notes on Server Push

From: James M Snell <jasnell@gmail.com>
Date: Wed, 15 May 2013 14:50:24 -0700
Message-ID: <CABP7RbcGvo2nEAVj4mkZoGfAU0hW4suSfFE9rLv2UMW4btKgKg@mail.gmail.com>
To: William Chan (陈智昌) <willchan@chromium.org>
Cc: Patrick McManus <pmcmanus@mozilla.com>, HTTP Working Group <ietf-http-wg@w3.org>
On Wed, May 15, 2013 at 2:41 PM, William Chan (陈智昌)
<willchan@chromium.org> wrote:
[snip]
>>
>> It's pretty obvious that there are going to be a fair number of pushed
>> streams that never get used.  I think there are lots of scenarios where
>> that's true - but we can start with things like AdBlockPlus and GreaseMonkey
>> scripts, where the view the client has of the DOM is pretty different from
>> the server's view. And the savings of push is really just on the order of 1
>> RTT for the whole page load. So client implementations need to strike a
>> balance between the costs of buffering and transferring that data and the
>> RTT savings.
>
>
> SETTINGS_MAX_CONCURRENT_STREAMS is another option here.
>

Bounding the maximum number of concurrent open streams is not quite
enough here... for one, if we limit the number of concurrent streams
without limiting the amount of data pushed within those streams,
developers could simply end up resorting to inlining again... one
stream for all their JS files, one stream for all their CSS, etc.
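To make the concern concrete, here is a minimal client-side sketch (all names invented for illustration; this is not any real HTTP/2 library's API) showing why a stream-count cap alone doesn't bound the cost of push: a budget that checks only the number of open pushed streams can still be forced to buffer arbitrary amounts of data, so a byte limit is needed alongside it.

```python
# Hypothetical sketch: a stream-count cap alone doesn't bound push cost.
# A client enforcing only SETTINGS_MAX_CONCURRENT_STREAMS can still be made
# to buffer unbounded data on a single pushed stream; adding a byte budget
# closes that gap. All class and method names here are illustrative.

class PushBudget:
    def __init__(self, max_streams, max_buffered_bytes):
        self.max_streams = max_streams
        self.max_buffered_bytes = max_buffered_bytes
        self.open_streams = 0
        self.buffered = 0

    def on_push_promise(self):
        # Stream-count check: all that the SETTINGS limit alone gives us.
        if self.open_streams >= self.max_streams:
            return "RST_STREAM"  # refuse the pushed stream
        self.open_streams += 1
        return "ACCEPT"

    def on_data(self, frame_len):
        # Byte check: without this, one stream can carry many inlined
        # resources and the concurrency cap buys nothing.
        if self.buffered + frame_len > self.max_buffered_bytes:
            return "RST_STREAM"
        self.buffered += frame_len
        return "ACCEPT"

    def on_stream_close(self):
        self.open_streams -= 1
```

With `max_streams=1`, a second PUSH_PROMISE is refused, but the one accepted stream is still only bounded by the separate byte budget.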

I, for one, am planning an experiment that uses a combination of DATA
frames and a modified version of a HEADERS frame as a way of replacing
the use of multipart MIME. Using that approach, it would be rather
simple for me to work around the max concurrent streams limit by
simply sequencing multiple resources in a single stream.
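The workaround described above can be sketched as follows. The wire format here is entirely invented for illustration (the actual experiment's framing is not specified in this thread): a small length-prefixed header record delimits each resource body inside a single stream, playing the role that multipart MIME boundaries, or a modified HEADERS frame, would play.

```python
# Hypothetical sketch of sequencing multiple resources in one stream:
# a length-prefixed header record before each body stands in for a
# modified HEADERS frame or a multipart MIME boundary. The format is
# invented for illustration only.

import struct

def pack_resources(resources):
    """resources: list of (headers: dict, body: bytes) pairs."""
    out = bytearray()
    for headers, body in resources:
        hdr_block = "\r\n".join(f"{k}: {v}" for k, v in headers.items()).encode()
        # length-prefixed header block, then length-prefixed body
        out += struct.pack("!I", len(hdr_block)) + hdr_block
        out += struct.pack("!I", len(body)) + body
    return bytes(out)

def unpack_resources(data):
    """Inverse of pack_resources: walk the stream, splitting records."""
    resources, pos = [], 0
    while pos < len(data):
        (hlen,) = struct.unpack_from("!I", data, pos); pos += 4
        hdr_block = data[pos:pos + hlen].decode(); pos += hlen
        headers = dict(line.split(": ", 1) for line in hdr_block.split("\r\n"))
        (blen,) = struct.unpack_from("!I", data, pos); pos += 4
        resources.append((headers, data[pos:pos + blen])); pos += blen
    return resources
```

The point of the sketch is only that nothing in a per-connection stream cap prevents this kind of in-stream batching: the receiver sees one stream, however many resources it carries.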

- James
Received on Wednesday, 15 May 2013 21:51:16 UTC
