
Re: Multi-GET, extreme compression?

From: Brian Pane <brianp@brianp.net>
Date: Mon, 18 Feb 2013 17:19:53 -0800
Message-ID: <CAAbTgTvMG5de+HYRiuxanajvNUxf+p5pg0TAuLgFfyNoa49USA@mail.gmail.com>
To: HTTP Working Group <ietf-http-wg@w3.org>

On Mon, Feb 18, 2013 at 1:21 PM, Phillip Hallam-Baker <hallam@gmail.com> wrote:
>
> On Mon, Feb 18, 2013 at 9:18 AM, Patrick McManus <pmcmanus@mozilla.com>
> wrote:
[...]
> I do not understand your argument here. Performance of MGET should be pretty
> much identical to delta encoding because it is essentially the same thing
> but without the complexity.

In some ways, MGET actually seems more complex.  Here's my rationale:

To avoid head-of-line blocking, a server or intermediary will need to
be able to return the requested resources out of order.  Consider the
simple case where the client MGETs five small, static resources.  The
request is processed by a CDN node, 10 msec away from the client, that
has all but the first of those resources in memory.  That cache has to
fetch the first resource from an origin server 100 msec away.  If MGET
uses in-order responses, the mean response time will be 3x worse with
MGET than it would have been with 5 separate GETs in parallel.
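A back-of-the-envelope model of that scenario (the latency numbers are the illustrative ones from the paragraph above, and the model ignores transfer time and queuing):

```python
# Assumed latencies from the example: a CDN node 10 ms from the client,
# holding 4 of the 5 resources; the 5th requires a fetch from an origin
# 100 ms beyond the CDN.
CACHED_MS = 10    # client <-> CDN response time for a cache hit
ORIGIN_MS = 100   # additional CDN <-> origin time for the one miss

# Five separate parallel GETs: each response arrives as soon as it is ready.
parallel = [CACHED_MS + ORIGIN_MS] + [CACHED_MS] * 4
mean_parallel = sum(parallel) / len(parallel)    # 30 ms

# In-order MGET: the slow first resource blocks the other four, so no
# response can arrive before the origin fetch completes.
miss_done = CACHED_MS + ORIGIN_MS
in_order = [max(miss_done, t) for t in parallel]
mean_in_order = sum(in_order) / len(in_order)    # 110 ms

print(mean_in_order / mean_parallel)             # ~3.7, roughly the "3x" above
```

Under these assumed numbers the mean comes out closer to 3.7x than 3x, but the shape of the penalty is the point: one cache miss at the head of the response list delays four cache hits.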

It's straightforward to do out-of-order responses if MGET works as
"MSYN_STREAM" - treat the MGET as shorthand for creating the right
number of streams and applying the request headers to each of those
streams.  But that only solves the header compression problem for
requests, not for responses.  If you want to reduce response header
transfer size too, you end up having to implement both MGET and header
compression.  And MGET would create some new error modes, especially
in combination with the recently proposed control frame continuation
mechanism.

Brian
Received on Tuesday, 19 February 2013 01:20:25 GMT